
The Signature Supply Chain: How Deep Does Digital Trust Go?

Every Signature Checked Out. Every Signature Was Forged.

A firmware update arrives at an industrial control system on a factory floor. The operator’s system runs through its verification chain. The code-signing certificate is valid, issued by a recognized intermediate CA, rooted in a trusted root certificate. The TLS connection that delivered the update was authenticated. The SBOM attestation accompanying the update carries a valid cryptographic signature. The container image that built the platform was signed and verified by the Kubernetes admission controller. Every check passes.

Every signature is forged.

This scenario is hypothetical today. No quantum computer can yet derive the private keys behind these signatures. But the attack surface this scenario exploits is not hypothetical. It is already built, already deployed, and already in production across every enterprise, every government, and every piece of critical infrastructure on earth. The chains of digital signatures that hold this infrastructure together run far deeper than most security teams realize, and migrating those chains to quantum-resistant algorithms is an engineering challenge of a fundamentally different character than deploying hybrid TLS.

Most PQC migration conversations focus on encryption. Harvest Now, Decrypt Later dominates the threat narratives, hybrid TLS key exchange dominates the implementation stories, and the impression left with many CISOs is that PQC migration is primarily a network-layer encryption exercise. That impression is dangerously incomplete. The signature infrastructure underneath every system you operate is larger, more complex, harder to migrate, and, when compromised, more destructive than the encryption layer that gets all the attention.

In a companion article on signature-first migration sequencing, I made the strategic argument for why PQC signature migration should be prioritized alongside or ahead of encryption migration. This article provides the technical map that argument rests on. It walks through every layer of the signature supply chain, shows what algorithms each layer uses today, explains what Shor’s algorithm breaks, and identifies the specific engineering constraints that make each layer difficult to migrate.

The goal is not to induce panic. It is to give CISOs, CTOs, and security architects a complete picture of what “signature infrastructure” actually means in a modern technology stack, so they can scope their migration programs accurately and avoid the unpleasant surprise of discovering, mid-migration, that the hard part is far larger than they planned for.

HNDL Gets the Headlines. TNFL Is the Structural Threat.

I will keep this section brief because the full argument lives in my Trust Now, Forge Later (TNFL) article and the companion piece on signature-first migration sequencing. But the framing matters for everything that follows.

Harvest Now, Decrypt Later (HNDL) is a confidentiality threat. Adversaries capture encrypted traffic today and store it, betting that a future cryptographically relevant quantum computer (CRQC) will allow them to decrypt it. The impact is retrospective and gradual. Each decrypted session reveals past communications. The damage is real but bounded.

TNFL is an integrity and authenticity threat. An adversary who can forge digital signatures gains the ability to take actions in the present: signing malicious software, impersonating trusted parties, fabricating identity credentials, and inserting themselves into supply chains. The impact is immediate and cascading. Q-Day for signatures will not feel like a gradual erosion of security. It will feel like a cliff.

Two recent papers have sharpened this argument with hard numbers. The Chevignard, Fouque, and Schrottenloher paper at EUROCRYPT 2026 showed that P-256 ECDSA, the dominant signature algorithm across WebPKI and code signing, requires only 1,193 logical qubits to break. The Google Quantum AI whitepaper demonstrated that breaking secp256k1 signatures is achievable with fewer than 500,000 physical superconducting qubits in about 9 minutes. ECC signatures fall to quantum attack with fewer resources than RSA encryption, confirming a thesis I have held since coining “Sign Today, Forge Tomorrow” in 2018.

The question this article answers is: if forged signatures are the bigger structural threat, exactly how many layers of signatures are we talking about? The answer, as every section below demonstrates, is far more than most organizations have mapped.

The Signature Stack: A Layer-by-Layer Anatomy

Modern systems do not rely on a single digital signature. They rely on interlocking chains of signatures that extend from the silicon in the processor through the firmware, the operating system, the application layer, the network, the cloud infrastructure, and the identity layer. Each link in this chain uses algorithms that Shor’s algorithm will break. Each link has its own migration constraints, its own ecosystem dependencies, and its own timeline.

What follows is a systematic walk through each layer: for each one, what it protects, what algorithms it uses today, what breaks under quantum attack, what makes migration difficult, and what is available now.

Hardware Root of Trust and Secure Boot

The lowest layer of the signature stack is etched into silicon. Every modern computing platform relies on a hardware root of trust that anchors all subsequent integrity verification.

Trusted Platform Modules (TPMs) contain endorsement keys, typically RSA-2048 or ECC P-256, that provide cryptographic proof of a device’s identity and integrity. These keys sign attestation reports that tell remote parties whether a system booted into a known-good state. UEFI Secure Boot relies on a signature verification chain that checks every piece of code executed during the boot process, from the initial firmware through the bootloader to the operating system kernel. Platform firmware itself carries vendor signatures, typically RSA-2048 or RSA-4096, that the hardware verifies before executing any code.

The quantum threat here is existential. A forged TPM endorsement key means the attestation reports that enterprise security tools rely on to verify platform integrity become meaningless. An attacker who can forge the platform firmware signing key can install persistent rootkits that survive operating system reinstallation, disk wipes, and even BIOS resets. This is the deepest possible level of compromise, and it is the hardest to detect and the hardest to remediate.

The migration challenge is equally fundamental: TPMs are hardware. You cannot patch the attestation key algorithm in a soldered chip. Devices with 10-20 year lifecycles that are shipping today with RSA-2048 or ECC P-256 endorsement keys will remain quantum-vulnerable for their entire operational life unless the device is physically replaced. The TCG (Trusted Computing Group) has begun work on PQC profiles for TPM 2.0, but the timeline from specification to silicon production to device deployment spans years. A TPM specification updated in 2027 might reach production servers in 2029 and enterprise laptops in 2030. Devices shipped before that window represent an irreducible quantum risk.

Secure Boot adds another dimension of complexity. The UEFI Secure Boot chain relies on a Platform Key (PK), Key Exchange Keys (KEK), and a database of authorized signatures (db). The PK is typically provisioned during manufacturing and cannot be easily changed afterward. If the PK uses RSA-2048 or ECDSA P-256, every boot verification on that device is anchored in a quantum-vulnerable key. Microsoft’s Secure Boot keys, which ship pre-provisioned on most PCs, are RSA-2048. Updating the Secure Boot trust anchor across the global PC fleet is a coordination challenge that makes WebPKI migration look simple by comparison.
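To make the shape of that verification chain concrete, here is a toy sketch of the walk from trust anchor to boot stage. Everything here is illustrative: the vendor key is hypothetical, HMAC stands in for the RSA signature verification real firmware performs, and the `db` set plays the role of the authorized-signature database anchored by the PK.

```python
import hashlib
import hmac

# Toy model (illustrative only): real Secure Boot uses RSA/X.509, not HMAC.
VENDOR_KEY = b"vendor-signing-key"              # hypothetical signer key
db = {hashlib.sha256(VENDOR_KEY).hexdigest()}   # fingerprints of trusted signers

def sign(image: bytes, key: bytes) -> bytes:
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_stage(image: bytes, sig: bytes, key: bytes) -> bool:
    # A stage boots only if its signer is trusted AND the signature checks out.
    trusted = hashlib.sha256(key).hexdigest() in db
    return trusted and hmac.compare_digest(sign(image, key), sig)

boot_chain = [b"firmware", b"bootloader", b"kernel"]
sigs = [sign(img, VENDOR_KEY) for img in boot_chain]

# Every stage verifies: boot proceeds.
assert all(verify_stage(i, s, VENDOR_KEY) for i, s in zip(boot_chain, sigs))

# A signature from an unauthorized key is rejected, even if internally valid.
rogue = b"rogue-key"
assert not verify_stage(b"bootloader", sign(b"bootloader", rogue), rogue)
```

The point of the sketch is the structure, not the primitives: every stage's trust reduces to the key material in `db` and ultimately the PK, which is exactly why a quantum-derivable PK undermines the entire chain.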

For firmware signing, the picture is somewhat brighter. CNSA 2.0 has approved hash-based signatures (LMS and XMSS, per NIST SP 800-208) for firmware and software signing, and these are deployable today. Organizations that control their firmware signing infrastructure can begin using LMS/XMSS this quarter without waiting for any ecosystem coordination.
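The reason hash-based schemes are deployable now is that their security rests only on the underlying hash function, which Shor's algorithm does not break. A minimal Lamport one-time signature, the conceptual ancestor of LMS and XMSS, shows the idea in a few lines. This is toy pedagogy, not LMS itself: a Lamport key must never sign twice, and LMS/XMSS exist precisely to manage Merkle trees of many such one-time keys safely.

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets: one pair per bit of the message hash.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]  # public key = hashes of the secrets
    return sk, pk

def bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per message-hash bit. The key must never sign twice:
    # a second message would reveal enough secrets to enable forgery.
    return [sk[i][bit] for i, bit in enumerate(bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(s) == pk[i][bit] for i, (s, bit) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(b"firmware-v2.bin", sk)
assert verify(b"firmware-v2.bin", sig, pk)
assert not verify(b"firmware-v2-tampered.bin", sig, pk)
```

Note that nothing here depends on factoring or discrete logarithms; forging a signature requires inverting SHA-256, which is why NIST SP 800-208 schemes are considered quantum-safe for signing today. The operational cost is statefulness: the signer must track which one-time keys have been used, which is why CNSA 2.0 scopes LMS/XMSS to firmware and software signing rather than general-purpose use.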

The practical action is clear: stop procuring devices with non-upgradeable signature verification, and begin signing firmware with LMS/XMSS wherever your HSMs and verification infrastructure support it.

Operating System and Kernel Integrity

One layer above hardware, the operating system’s own integrity depends on a web of signatures that most administrators rarely examine.

Linux kernel module signing uses RSA or ECDSA keys to verify that every loadable kernel module was built by a trusted source. A forged kernel module signing key would allow an attacker to load arbitrary code into the kernel itself, achieving root-level persistent compromise that is invisible to user-space monitoring tools. This is not a theoretical concern; kernel-level rootkits are among the most feared categories of malware precisely because they operate below the visibility of endpoint detection tools, anti-malware scanners, and security information and event management (SIEM) systems. Everything running in user space trusts the kernel implicitly. A compromised kernel can hide processes, intercept system calls, modify file contents transparently, and log every keystroke without leaving traces in any user-visible log.

Windows driver signing through Authenticode uses similar certificate-based verification, and Microsoft enforces increasingly strict driver signing requirements. Since Windows 10, all kernel-mode drivers submitted through the Windows Hardware Dev Center must be signed with Microsoft’s own certificate through the attestation signing process. This means the signing key for Microsoft’s driver attestation service is a single point of trust for every kernel-mode driver on every Windows system worldwide.

Linux package managers (RPM, DEB, APK) rely on GPG signatures to verify that every installed package comes from a trusted repository. The distribution’s GPG signing key protects the integrity of every package update for every user of that distribution. A forged Debian or Ubuntu signing key would allow an attacker to distribute malicious packages through the official repository infrastructure, compromising every system that runs apt update. macOS uses code signing with certificates issued by Apple’s developer program to verify application integrity.

Every one of these mechanisms uses RSA or ECDSA today. Every one of them is quantum-vulnerable.

The migration difficulty at this layer is moderate compared to hardware but significant in practice. Linux distributions would need to update their signing keys and modify their package verification toolchains. Windows Authenticode would need Microsoft to issue PQC code-signing certificates and update the driver signing infrastructure. The dependency chain extends through distribution maintainers, hardware vendors who ship signed drivers, and every software publisher who signs packages for distribution.

The timeline pressure comes from the fact that OS and kernel integrity protections are foundational. A compromised kernel is game over for every security control running above it. If the firmware layer is the foundation of a building, the kernel layer is the structural frame. Neither can be patched after the building is occupied without extraordinary effort.

Code Signing and Software Updates

This is the layer where the SolarWinds analogy hits hardest, and it is the layer that most CISOs think of first when they hear “signature compromise.”

Vendor code-signing certificates protect the integrity of software updates across every platform. Windows Authenticode, Apple codesign, Java JAR signing, and Android APK signing all verify that the software a user installs was produced by the claimed publisher and has not been tampered with. Auto-update mechanisms in every application, every device, and every IoT firmware over-the-air (OTA) system rely on signature verification to decide whether to install an update. App store signatures from Google Play and Apple’s App Store add another layer, with the platform itself re-signing applications before distribution.

The SolarWinds attack of 2020 required months of patient infiltration to compromise a build pipeline and sign malicious code with a legitimate key. The XZ Utils backdoor discovered in 2024 required years of social engineering, with a persistent contributor gradually building trust within the open-source community until they were granted commit access, at which point they inserted a subtle backdoor into a compression library used across virtually all Linux distributions. NotPetya in 2017 hijacked the M.E.Doc tax software update mechanism to distribute destructive malware, causing over $10 billion in damages globally. In every case, the attackers needed elaborate, expensive schemes to obtain the ability to sign malicious code. A CRQC eliminates all of that effort. The code-signing key’s public counterpart is, by definition, publicly available. Derive the private key, sign whatever you want, and push it through the legitimate distribution channel. The entire supply chain attack collapses to a mathematical operation.

What makes this layer particularly alarming is the scale of automated trust. Modern update mechanisms are designed to install signed updates with minimal or no human intervention. Windows Update, Chrome’s auto-update, iOS software updates, IoT firmware OTA systems, and enterprise software deployment tools all verify a signature and proceed to install. This automation is a feature, not a bug; manual approval of every update would be operationally unworkable. But it means that a single forged code-signing key can propagate malicious code across millions of devices without any human gatekeeper having an opportunity to intervene. The speed of propagation is determined by the update frequency, not by any human decision loop.

The migration status here is mixed. For code signing that you control internally (your own firmware, your own software builds), LMS/XMSS and ML-DSA (FIPS 204, formerly CRYSTALS-Dilithium) are available today. AWS KMS supports ML-DSA signing operations. AWS Private CA can issue ML-DSA code-signing certificates. For publicly distributed software that relies on platform-specific signing infrastructure (Authenticode, Apple codesign), the migration depends on platform vendors updating their signing infrastructure. Microsoft has announced PQC support in Windows Server 2025 and Windows 11, including ML-DSA in the CNG (Cryptography API: Next Generation) libraries. Apple’s PQC roadmap for code signing is less publicly detailed, though Apple deployed PQ3 for iMessage key exchange in 2024, signaling active internal PQC development. Android’s APK signing infrastructure will eventually need to support PQC, but Google has not published a specific timeline for APK signing migration distinct from their broader PQC efforts.

Container and Cloud-Native Integrity

The cloud-native ecosystem has built an entirely new layer of signature infrastructure over the past decade, and it is being built, right now, with quantum-vulnerable algorithms.

Container image signing via cosign (part of the Sigstore project) and Docker Content Trust (Notary v2) uses ECDSA or RSA to sign container images, ensuring that only verified images are deployed. SBOM (Software Bill of Materials) attestation and in-toto provenance frameworks use signatures to establish that an artifact was produced by a specific build pipeline and has not been modified. Kubernetes admission controllers can enforce signature verification policies, rejecting any pod whose container image does not carry a valid signature.

CI/CD pipeline integrity increasingly depends on signed commits (GPG signatures on git commits) and signed build artifacts. GitHub’s commit signing, for example, uses GPG keys that are typically RSA-2048 or ECC.
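A provenance attestation is, at its core, a signed statement binding an artifact digest to build metadata, which an admission controller then checks before deployment. Here is a toy, in-toto-shaped sketch of that flow, with a hypothetical builder key and HMAC standing in for the ECDSA/RSA signatures real pipelines use; the field names are simplified stand-ins, not the actual in-toto schema.

```python
import hashlib
import hmac
import json

BUILDER_KEY = b"build-pipeline-key"  # hypothetical; real pipelines use key pairs

def attest(artifact: bytes, builder_id: str) -> dict:
    # in-toto-style statement: subject digest plus provenance metadata.
    stmt = {
        "subject": {"sha256": hashlib.sha256(artifact).hexdigest()},
        "predicateType": "provenance",  # simplified stand-in for the real type URI
        "builder": builder_id,
    }
    payload = json.dumps(stmt, sort_keys=True).encode()
    sig = hmac.new(BUILDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def admit(envelope: dict, artifact: bytes) -> bool:
    # Admission-controller logic: signature valid AND digest matches artifact.
    payload = envelope["payload"].encode()
    want = hmac.new(BUILDER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(want, envelope["signature"]):
        return False
    return json.loads(payload)["subject"]["sha256"] == hashlib.sha256(artifact).hexdigest()

image = b"container-image-bytes"
env = attest(image, "ci.example.internal/pipeline-7")  # hypothetical builder ID
assert admit(env, image)
assert not admit(env, b"tampered-image-bytes")
```

The "non-forgeable" guarantee that SLSA Level 3 requires lives entirely in that one signature check, which is exactly the check a quantum-derived builder key satisfies.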

The irony of this layer is that it represents the most modern, most security-conscious part of the software supply chain, and it is being constructed entirely on cryptographic foundations that will not survive quantum computing. Organizations that are investing heavily in supply chain security through Sigstore, SLSA (Supply-chain Levels for Software Artifacts), and SBOM attestation are building their trust infrastructure on algorithms with a finite remaining life. The SLSA framework, which Google developed and the OpenSSF adopted, defines levels of supply chain integrity assurance. At SLSA Level 3 and above, build provenance must be cryptographically signed and non-forgeable. But “non-forgeable” assumes the signing algorithm resists key recovery, an assumption that breaks under quantum attack.

The migration path for this layer is actually among the more tractable ones, precisely because the tooling is newer and the teams building it are security-focused. Sigstore and cosign could add ML-DSA support without the backwards-compatibility constraints that burden legacy signing infrastructure. Git commit signing could transition from GPG-based RSA/ECDSA to PQC algorithms. The challenge is timing and prioritization: the cloud-native supply chain security community is focused (rightly) on preventing classical supply chain attacks today, and PQC readiness is not yet a top-of-mind priority for most of these projects. If these tools do not add PQC support before the quantum threat window opens, every supply-chain attestation made with them will become unverifiable, and the entire provenance chain that organizations have invested in building will lose its cryptographic foundation.

TLS Certificate Infrastructure (WebPKI)

The WebPKI is the most visible signature infrastructure on the internet and, paradoxically, one of the most structurally difficult to migrate.

A TLS certificate chain carries multiple signatures. The root CA signs the intermediate CA certificate. The intermediate CA signs the leaf (end-entity) certificate. The server presents this chain during the TLS handshake, and the client verifies each signature. But the signature dependency goes further. OCSP (Online Certificate Status Protocol) responses are signed by the CA, providing real-time certificate revocation status. Certificate Revocation Lists (CRLs) are signed. Certificate Transparency (CT) logs carry signed entries that provide public audit trails. Timestamping Authority (TSA) signatures bind proof-of-existence to a specific moment in time.

Today, the vast majority of this infrastructure runs on ECDSA P-256 for leaf certificates and RSA-2048 or RSA-4096 for root and intermediate CAs. Every one of these algorithms is broken by Shor’s algorithm, and the Chevignard et al. paper showed that breaking ECDSA P-256 requires only 1,193 logical qubits.

The migration difficulty for WebPKI is structural, not technical. ML-DSA (FIPS 204) exists and works. RFC 9881 defines the X.509 algorithm identifiers for ML-DSA. DigiCert, Sectigo, and Entrust offer PQC certificate pilot programs. The problem is coordination. The CA/Browser Forum must update its Baseline Requirements to permit ML-DSA in publicly trusted certificates. DigiCert has drafted a ballot for this, but as of mid-2026 it has not been adopted. Until the Baseline Requirements are updated, no publicly trusted CA can issue ML-DSA certificates for production WebPKI use.

Meanwhile, the certificate size problem looms. An ML-DSA-65 public key is 1,952 bytes and its signature is 3,309 bytes. An ECDSA P-256 public key is roughly 64 bytes and its signature roughly 64 bytes. A full TLS certificate chain using ML-DSA adds approximately 14.7 KB to the handshake, compared to about 3-5 KB for ECDSA. On a fast broadband connection, this is negligible. On mobile networks with 100-300 ms round-trip times and constrained bandwidth, the impact is measurable. For IoT devices communicating over cellular or LPWAN connections, it can be prohibitive. Certificate compression helps with the ASN.1 structural overhead but provides little benefit for the PQC keys and signatures themselves, which are high-entropy data that resists compression.

This is why Google and Cloudflare are developing Merkle Tree Certificates (MTCs), a fundamental redesign of the WebPKI authentication model that replaces per-certificate signatures with compact hash-based inclusion proofs. The IETF’s PLANTS working group is standardizing this approach, with Google and Cloudflare already running a Phase 1 feasibility study with approximately 1,000 TLS certificates as of early 2026. The MTC approach could reduce quantum-resistant authentication data to as little as 736 bytes, making PQC authentication potentially smaller than today’s classical certificate chains. Phase 2, targeted for Q1 2027, would invite existing CT Log operators to bootstrap the public MTC infrastructure. If MTCs succeed, they will define how billions of TLS connections authenticate in the post-quantum era. If they do not, the industry faces the prospect of significantly slower TLS handshakes for every web connection.

Certificate lifetimes add another variable. The CA/Browser Forum’s Ballot SC-081v3 is shrinking maximum TLS certificate validity from the current 398 days to 200 days (effective March 2026), then 100 days (March 2027), and ultimately 47 days by March 2029. Shorter certificate lifetimes mean more frequent issuance, which means paying the PQC overhead tax more often. But shorter lifetimes also reduce the exposure window if a CA key is compromised, making the overall system more resilient to key compromise. The tension between shorter lifetimes and larger certificates will define the operational character of post-quantum WebPKI.

The IETF LAMPS working group is also still finalizing composite certificate formats (draft-ietf-lamps-pq-composite-sigs, now at version 16 as of April 2026) that would allow hybrid certificates carrying both a classical and a PQC signature. These composite formats are designed for the transition period, allowing upgraded clients to verify the PQC signature while legacy clients fall back to the classical one.

For internal PKI, there is no reason to wait. AWS Private CA, DigiCert Trust Lifecycle Manager, and OpenSSL 3.5+ with the OQS provider all support ML-DSA certificate issuance today. Any organization that operates its own internal CA should be piloting ML-DSA certificates now.

API Authentication and Identity Tokens

If the WebPKI secures the network layer, identity tokens secure the application layer. In a world where identity is increasingly the security perimeter, the signatures on identity tokens are among the most consequential in the entire stack.

OAuth 2.0 access tokens are frequently implemented as JWTs (JSON Web Tokens) signed with RS256 (RSA with SHA-256) or ES256 (ECDSA with P-256). SAML assertions, the backbone of enterprise single sign-on, use XML signatures typically based on RSA. Mutual TLS (mTLS) uses client certificates signed with RSA or ECDSA. OpenID Connect ID tokens carry signatures that bind an authentication event to a specific identity. FIDO2/WebAuthn attestation, the modern standard for passwordless authentication, uses ECDSA P-256 as its primary attestation algorithm.

A forged identity provider signing key is the digital equivalent of a master key to every room in a building. An attacker who can forge the SAML IdP signing key for a large enterprise can impersonate any employee, at any access level, in any system that trusts that identity provider. The forged authentication token is cryptographically indistinguishable from a legitimate one. No monitoring tool will flag it. No audit log will reveal the impersonation, because the token passes every verification check.

The scale of this exposure is easy to underestimate. A large enterprise might have a single SAML IdP that federates authentication to 200+ SaaS applications, from email and collaboration tools to ERP systems, financial applications, and customer databases. The signing key for that IdP protects access to all of them. In a zero-trust architecture where every access decision is mediated by identity verification, the IdP signing key is the ultimate trust anchor. Compromising it does not give the attacker access to one system; it gives them access to everything, as anyone.

The OAuth/JWT ecosystem presents a related but differently shaped challenge. Modern microservices architectures use JWTs for service-to-service authentication, with each service verifying the token’s signature before granting access. A forged JWT signing key would allow an attacker to construct tokens that grant arbitrary access across the entire service mesh. In cloud-native environments where hundreds of microservices communicate through JWT-authenticated APIs, the blast radius of a compromised signing key extends across the full application layer.

FIDO2/WebAuthn attestation adds yet another dimension. The authenticator attestation key, typically ECDSA P-256, provides cryptographic proof that a specific hardware authenticator generated a credential. If an attacker can forge attestation signatures, they can fabricate credentials that appear to come from legitimate hardware authenticators, undermining the security guarantee that made passwordless authentication attractive in the first place.

The migration complexity at this layer is driven by ecosystem fragmentation. JWT libraries need to add ML-DSA support. SAML implementations need to support PQC XML signatures. FIDO2 authenticators (hardware security keys, platform authenticators) need firmware updates to support PQC attestation algorithms. Each of these is controlled by a different set of standards bodies, vendors, and open-source communities. The IANA COSE algorithm registry has been updated to include ML-DSA, but the propagation through the ecosystem of JWT libraries, identity providers, and relying parties will take years.

The critical action for CISOs is to identify which identity provider signing keys are the highest-value targets in their environment. If your enterprise uses a single SAML IdP that authenticates 50,000 users across hundreds of applications, the signing key for that IdP is one of the most consequential cryptographic assets in your organization. It should be among the first migrated.

Document and Transaction Signing

Some signatures are not ephemeral. They are designed to remain valid for years or decades, and this creates a category of migration challenge with no parallel on the encryption side.

The EU’s eIDAS regulation establishes a legal framework for qualified electronic signatures that carry the same legal weight as handwritten signatures. Financial transaction authorization signatures provide non-repudiation for banking and trading operations. Timestamping authorities provide signed proof that a document or transaction existed at a specific point in time. Long-term validation (LTV) mechanisms embed all the information needed to verify a signature far into the future, including certificates, OCSP responses, and CRL data.

The challenge: a document signed today with ECDSA must remain verifiable in 2045. If a CRQC exists by 2035, ECDSA signatures become forgeable, and every document signed with ECDSA before that date is retrospectively untrustworthy. The forensic question of whether a specific signature was created before or after a CRQC existed becomes unanswerable. Legal frameworks that depend on the integrity of digital signatures will face foundational challenges.

Consider the implications for specific industries. In pharmaceutical manufacturing, batch release records carry digital signatures that must remain verifiable for regulatory inspection at any point during the product’s shelf life and beyond. In real estate, digitally signed property transfer documents must remain legally binding for as long as the property exists. In financial services, trade execution records carry signatures that may be subject to regulatory review decades after the transaction. In each case, the signature provides the guarantee that the record has not been tampered with since creation. When that guarantee can be forged retroactively, the evidentiary value of the record collapses.

Timestamping authorities add another dimension. A TSA signature provides cryptographic proof that a document existed at a specific moment. If the TSA’s signing key is quantum-vulnerable, an attacker could fabricate timestamps, backdating forged documents to make them appear genuine. The integrity of the timestamping infrastructure is therefore critical to maintaining the evidentiary value of any long-lived signed artifact.

The practical response has two parts. First, any document or artifact signed today that must remain verifiable beyond 2035 should use PQC signatures now or be wrapped with a PQC countersignature and timestamp. Second, timestamping infrastructure itself must migrate to PQC, because a forged timestamp signing key would allow an attacker to backdate or fabricate proof of existence. Some organizations are beginning to explore “archive re-signing” strategies, where existing signed artifacts are periodically wrapped with fresh PQC countersignatures to extend their verifiable lifetime, but this approach is operationally complex and has not yet been standardized.
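The countersignature wrapper can be sketched in a few lines. This toy uses HMAC and a hypothetical TSA key purely to show the binding; a real deployment would use ML-DSA or LMS in an HSM behind an RFC 3161-style timestamping service, and, as noted, the re-signing approach itself is not yet standardized.

```python
import hashlib
import hmac
import time

PQC_TSA_KEY = b"pqc-timestamp-key"  # hypothetical; stands in for an HSM-held PQC key

def countersign(document: bytes, legacy_sig: bytes) -> dict:
    # Bind the document AND its original classical signature to a point in
    # time, so a later forgery of the legacy algorithm cannot rewrite history.
    ts = int(time.time())
    blob = document + legacy_sig + str(ts).encode()
    return {
        "timestamp": ts,
        "digest": hashlib.sha256(blob).hexdigest(),
        "countersig": hmac.new(PQC_TSA_KEY, blob, hashlib.sha256).hexdigest(),
    }

def check(document: bytes, legacy_sig: bytes, wrapper: dict) -> bool:
    blob = document + legacy_sig + str(wrapper["timestamp"]).encode()
    want = hmac.new(PQC_TSA_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(want, wrapper["countersig"])

doc, old_sig = b"property-transfer.pdf", b"<ecdsa-signature>"
wrapper = countersign(doc, old_sig)
assert check(doc, old_sig, wrapper)
assert not check(b"altered.pdf", old_sig, wrapper)
```

The key property is that the wrapper covers the legacy signature itself: even after a CRQC makes the ECDSA signature forgeable, any artifact whose ECDSA signature differs from the one countersigned before Q-Day fails the PQC check.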

NIST’s IR 8547 proposes that quantum-vulnerable algorithms at 112-bit security (RSA-2048, ECC P-256) will be deprecated after 2030 and disallowed after 2035. For signatures on documents with multi-decade validity requirements, the 2030 deprecation date effectively means the transition should start now.

Embedded and OT/ICS Firmware: The Hardest Layer

At the intersection of digital signatures and physical consequences lies the embedded systems layer. This is where the migration is hardest, the timelines are longest, and the stakes are highest.

Industrial controller firmware signing protects the integrity of code running on PLCs, RTUs, and SCADA systems that manage power grids, water treatment facilities, manufacturing lines, and oil and gas infrastructure. Automotive ECU (Electronic Control Unit) code signing verifies the software in every major system of a modern vehicle, from engine management to braking to steering. Medical device firmware verification ensures that the software in pacemakers, insulin pumps, imaging equipment, and surgical robots has not been tampered with. Aviation and aerospace embedded systems use signed firmware for flight control computers, navigation systems, and engine management.

A forged firmware signing key for any of these categories does not produce a data breach. It produces the potential for physical harm. This is the territory I explored in Cyber-Kinetic Security: the boundary where digital integrity failures produce kinetic consequences. A forged firmware update to an industrial controller can cause equipment to operate outside safe parameters. A forged automotive ECU update can affect braking behavior. A forged medical device firmware update can alter drug dosing or diagnostic accuracy.

The migration constraints at this layer are severe. Devices with 15-25 year operational lifecycles are common in industrial environments. Many of these devices have no remote update capability; firmware updates require a physical visit and sometimes a production shutdown. Safety certification requirements (IEC 61508 for industrial safety, ISO 26262 for automotive, FDA regulation for medical devices) mean that any change to the firmware, including the signature verification algorithm, triggers a recertification process that can take months or years. Constrained compute environments in many embedded devices may not have sufficient processing power or memory for ML-DSA signature verification.

And these devices are being manufactured and deployed right now with ECDSA or RSA signature verification fused into hardware that cannot be updated. Every unit shipped today with a non-upgradeable classical-only signature verification path is a device that will become unsecurable once a CRQC exists. The replacement cost and operational disruption of swapping out tens of thousands of industrial controllers, medical devices, or automotive ECUs dwarfs any other component of a PQC migration program.

This is where Trust Now, Forge Later becomes a safety-of-life issue, and it is why I keep returning to the argument that signature migration cannot be deferred to “after we finish encryption.”

The Cascade Problem: Why One Forged Signature Breaks Everything

The layers described above do not exist independently. They form interlocking chains, and the compromise of a single layer cascades through every layer above it.

A forged root CA private key compromises every certificate ever issued under that root, every OCSP response signed by that CA, and every relying party that trusts that root. An attacker with a forged root CA key can issue valid-appearing certificates for any domain, enabling impersonation of any website, any API endpoint, and any service that relies on TLS certificate verification.

A forged code-signing key turns every auto-update channel that trusts that key into a malware distribution mechanism. If the compromised key belongs to a major software vendor, the blast radius can encompass millions of devices. SolarWinds reached roughly 18,000 organizations through a single compromised build pipeline whose output carried the vendor’s legitimate signature. A quantum-enabled code-signing forgery targeting a vendor with broader distribution, say an operating system vendor, a browser vendor, or a major cloud provider, could reach billions of devices.

But the cascade goes deeper than any single layer. Consider a multi-layer attack: an adversary with CRQC capability forges a root CA key to issue a valid-appearing TLS certificate for a software vendor’s update server. Simultaneously, they forge the vendor’s code-signing key to sign a malicious update. They host the malicious update on a server that presents the forged TLS certificate. The client system connects, verifies the (forged) TLS certificate, downloads the update, verifies the (forged) code signature, and installs the malware. Every verification check passes. No individual check was bypassed; every check was satisfied by a mathematically valid forgery. This is the scenario from the opening of this article, and every piece of infrastructure it requires already exists and is already in production.
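The cascade can be sketched in a few lines. In the toy model below, HMAC stands in for the RSA/ECDSA signatures a real chain uses, and every key and certificate name is illustrative; the point is that each verifier checks only mathematical validity against a trusted key, so an attacker who has derived the private keys satisfies every check without bypassing any.

```python
import hashlib
import hmac

def sign(key: bytes, msg: bytes) -> bytes:
    # Stand-in for ECDSA/RSA signing; HMAC keeps the sketch self-contained.
    # In the CRQC scenario, the attacker has derived the real private keys.
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    # The relying party checks only that the signature is valid under a
    # key it trusts. It cannot tell who computed the signature.
    return hmac.compare_digest(sign(key, msg), sig)

# Trust anchors the client already holds (forged via Shor in the scenario).
ca_key     = b"root-ca-private-key"
vendor_key = b"vendor-codesign-key"

tls_cert = b"CN=updates.vendor.example"   # certificate body served by attacker
update   = b"malicious firmware payload"  # attacker-controlled update

# Attacker, holding both derived keys, produces mathematically valid signatures.
forged_tls_sig  = sign(ca_key, tls_cert)
forged_code_sig = sign(vendor_key, update)

# The client-side verification chain: every check is satisfied.
assert verify(ca_key, tls_cert, forged_tls_sig)     # TLS certificate checks out
assert verify(vendor_key, update, forged_code_sig)  # code signature checks out
print("all checks passed; update installed")
```

The design point the sketch makes is that no verifier in the chain has any signal to act on: the forgery is indistinguishable, by construction, from a legitimate signature.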

A forged TPM endorsement key means that every subsequent integrity attestation from that device is suspect. Any security decision that relies on remote attestation (trusted computing, confidential computing, device health checks, conditional access policies) becomes unreliable. In a zero-trust architecture, where device health attestation is a prerequisite for network access, a compromised TPM attestation key undermines the entire access model.

A forged SAML IdP signing key means the attacker can impersonate any user in the organization, at any privilege level, without triggering any authentication alarm. The forged token passes every verification check because it carries a mathematically valid signature from a key that the relying party trusts.
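The token-forgery case is concrete enough to sketch directly. The snippet below builds a JWT-shaped token with HMAC (HS256) standing in for the RS256/ES256 signatures a real IdP would use; the key and claims are illustrative inventions. The relying party’s check is purely mathematical, so a token minted with a derived key passes.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # Unpadded base64url, as used in JWTs.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(signing_key: bytes, claims: dict) -> str:
    # JWT-shaped token; HS256 stands in for the RS256/ES256 an IdP would use.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(signing_key, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(mac.digest())}"

def relying_party_accepts(trusted_key: bytes, token: str) -> bool:
    # All the relying party checks: does the signature verify under the
    # key it trusts? Provenance of the signature is invisible to it.
    header, payload, sig = token.split(".")
    mac = hmac.new(trusted_key, f"{header}.{payload}".encode(), hashlib.sha256)
    return hmac.compare_digest(b64url(mac.digest()), sig)

idp_key = b"idp-signing-key"  # the key a CRQC would derive from the IdP's public key

# Attacker mints a token for any identity, at any privilege level.
forged = make_token(idp_key, {"sub": "cfo@example.com", "role": "admin"})
assert relying_party_accepts(idp_key, forged)  # passes every check, no alarm
```

Real SAML assertions are XML-DSig structures rather than JWTs, but the verification logic, and therefore the failure mode, is the same shape.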

The historical analogues reinforce this pattern. SolarWinds required months of careful infiltration to compromise a single build system and its signing key. The XZ Utils compromise required years of social engineering to insert a backdoor into a core Linux compression library. NotPetya hijacked the M.E.Doc tax software update mechanism to distribute destructive malware across Ukraine and beyond. In every case, the hardest part of the attack was gaining access to the signing capability. A CRQC makes every signing key vulnerable at once; the attack surface expands in every direction and becomes universal. The elaborate infiltration operations that today’s supply chain attacks require would be replaced by a mathematical computation applied to any publicly available public key.

This is the shift that I believe most threat models have not yet absorbed. Today, supply chain attacks are targeted, expensive, and rare. When a CRQC exists, the constraint that makes them rare, the difficulty of compromising signing keys, disappears. The CRQC Quantum Capability Framework I maintain tracks the engineering progress toward this capability across ten dimensions. The resource estimates from Gidney (2025) and the Google Quantum AI team show that the physical qubit counts are already within the range of credible hardware roadmaps.

What Your CBOM Reveals, and What It Misses

When you actually enumerate the signatures in a real production system, the count is staggering. This is something I have documented in detail through the Cryptographic Bill of Materials (CBOM) analyses I have published for real-world systems.

The CBOM for an Open RAN telecom deployment reveals that every O-RU, O-DU, O-CU, RIC, and SMO component uses certificate-based authentication, code signing, secure boot signatures, and IPsec/TLS with signature-based key exchange. A single telecom RAN deployment has hundreds of signature verification points, each of which must be migrated. The telecom case is particularly instructive because it combines all of the migration challenges discussed above: hardware-constrained devices (radio units with 15-year lifecycles), real-time performance requirements (radio protocols cannot tolerate latency spikes from larger PQC signatures), multi-vendor coordination (the O-RAN Alliance involves dozens of vendors whose equipment must interoperate), and regulatory compliance across multiple jurisdictions.

The cryptographic iceberg inside a mobile banking transaction documents over 320 cryptographic function calls before a user types a transfer amount. That number is not a theoretical estimate; it is a measured count from a real mobile banking session, encompassing secure boot of the mobile device, app store verification, TLS certificate chain verification for the banking API, OAuth token signing and verification, transaction signing, EMV cryptograms, and HSM key hierarchies on the bank’s side. Of those 320 operations, the majority involve signature verification at some layer of the stack. Migrating the mobile banking CBOM to PQC would require coordinating with the device manufacturer (for secure boot), the app store operator (for app signing), the bank’s TLS infrastructure, the identity provider, the payment processor, the card networks, and the HSM vendors. No single party controls the full chain.

The cryptography in interbank payments analysis reveals a comparable density of signature dependencies across SWIFT messaging, real-time gross settlement systems, and clearing house protocols. The cryptography in telecommunications and 5G analysis extends the picture to 5G-AKA authentication, base station integrity verification, and lawful intercept warrant authentication.

CBOM analysis reveals what no other tool can. But what current CBOM tooling misses is equally important. Most discovery tools do a good job of finding network-layer cryptography: TLS certificates, IPsec configurations, SSH keys. They are much weaker at finding application-layer cryptography (JWT signing keys, SAML IdP keys, embedded signature verification in firmware), data-at-rest encryption configurations, and the signature dependencies buried in embedded firmware and OT/ICS devices. As I detailed in Rethinking CBOM and the CBOM deep dive, fewer than 5% of enterprises have completed a comprehensive cryptographic inventory. And even a complete inventory only tells you what you have. It does not tell you the migration difficulty of each dependency, the ecosystem coordination required, or the vendor dependencies that may block your timeline.

Meta’s PQC migration framework, published in April 2026, reinforces this point from a hyperscaler perspective. Meta’s team describes their migration as requiring comprehensive cryptographic inventory as a prerequisite, and their five-level maturity model (PQ-unaware through PQ-enabled) starts with the inventory phase. If a company operating at Meta’s scale of technical sophistication considers inventory the hard first step, most enterprises should expect even greater discovery challenges.

The quantum readiness cryptographic inventory guide I published provides the methodology for building a comprehensive inventory that includes signature dependencies alongside encryption ones. The key message: if your cryptographic inventory does not enumerate your code-signing keys, firmware signing infrastructure, CA dependencies, identity provider signing keys, and container image signing chains, it is incomplete in the areas that matter most for the TNFL threat.

Why Signature Migration Is Harder Than Encryption Migration

This is the analytical core that distinguishes a real migration program from a slide deck. There are seven concrete reasons why signature migration is structurally more difficult than encryption migration, and understanding them is the prerequisite for realistic program planning.

Encryption migration is forward-looking; signature migration is backward-looking. Deploy hybrid TLS today and every new session is protected. Old sessions encrypted with classical algorithms remain exposed, but the forward protection is immediate. Signature migration has no equivalent. Existing signed firmware images remain in the field, signed with ECDSA or RSA, and they cannot be retroactively re-signed. Certificates already issued remain in use until they expire. Legal documents, contracts, and regulatory filings carry classical signatures that will remain quantum-vulnerable for as long as they exist. The signed artifacts of the past cannot be unwritten.

The certificate size problem is structural, not transitional. An ML-DSA-65 public key (1,952 bytes) and signature (3,309 bytes) are roughly 30-50 times larger than their ECDSA P-256 equivalents. This is not a temporary inconvenience; it is a permanent characteristic of lattice-based signatures. It affects TLS handshake latency (adding 10+ KB to each handshake), Certificate Transparency log storage, bandwidth-constrained IoT channels, and embedded systems with limited memory. The PQC signature size problem is why Google and Cloudflare are redesigning the WebPKI rather than dropping ML-DSA into existing X.509 infrastructure. It is why the IETF LAMPS working group is still iterating on composite certificate formats. And it is why SLH-DSA (FIPS 205, formerly SPHINCS+), the conservative hash-based alternative, is even worse on size, making it impractical for many certificate use cases despite its stronger security assumptions.
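The arithmetic behind the handshake impact is easy to sketch. The model below is a back-of-envelope count of signature-related bytes only (a two-certificate chain, one CertificateVerify signature, two embedded CT SCT signatures), not an exact wire-format measurement; the ML-DSA-65 sizes come from FIPS 204, and the ECDSA P-256 sizes are typical encodings.

```python
# Back-of-envelope: signature-related bytes in a TLS 1.3 handshake with a
# two-certificate chain (leaf + intermediate), one CertificateVerify
# signature, and two embedded CT SCT signatures. A simplified model, not a
# byte-accurate wire-format count.
SIZES = {
    "ECDSA P-256": {"pk": 65,   "sig": 72},    # uncompressed point; DER sig (approx.)
    "ML-DSA-65":   {"pk": 1952, "sig": 3309},  # FIPS 204 parameter set
}

PUBKEYS_IN_CHAIN = 2  # leaf + intermediate subject public keys
SIGNATURES = 5        # 2 certificate signatures + CertificateVerify + 2 SCTs

for alg, s in SIZES.items():
    total = PUBKEYS_IN_CHAIN * s["pk"] + SIGNATURES * s["sig"]
    print(f"{alg}: {total} bytes of signature material")
```

Under these assumptions the classical chain carries 490 bytes of signature material and the ML-DSA-65 chain carries 20,449 bytes, roughly a 40x increase and comfortably more than 10 KB of added handshake data, which is consistent with the figures above.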

HSM support is incomplete. Organizations that require FIPS-validated cryptography (financial services, government, healthcare) face a dependency on their HSM vendors. AWS KMS supports ML-DSA. Thales, Utimaco, and SEALSQ are working toward FIPS 140-3 Level 3 validation with PQC algorithms, but not all have production-ready PQC signature support. If your code-signing or CA private keys live in an HSM that cannot generate or use ML-DSA keys, your migration is blocked at the hardware level. And the FIPS 140-2 sunset (September 21, 2026, when all remaining certificates move to Historical status) adds pressure: new HSM procurements must be FIPS 140-3, and those should be PQC-capable.

Embedded systems cannot be updated. This bears repeating because it is the single largest category of irreducible signature migration risk. A device with a 20-year lifecycle and a hardware-fused ECDSA verification key cannot be made quantum-safe without physical replacement. The cost of replacing millions of industrial controllers, medical devices, automotive ECUs, and building management systems will be measured in billions of dollars across industry.

The WebPKI coordination problem has no encryption parallel. Hybrid TLS key exchange is bilateral: you upgrade the client and the server, and the session is protected. WebPKI signature migration requires CAs, browsers, OS root stores, relying parties, and CT log operators to all support PQC signatures. A CA cannot issue ML-DSA certificates if browsers reject them. Browsers cannot accept them if no CA is issuing them. The CA/Browser Forum must update its Baseline Requirements. The IETF must finalize composite certificate formats. This coordination problem does not exist for key exchange and is the primary reason why the PQC community’s progress on key exchange is years ahead of its progress on signatures.

Long-lived signatures create an indefinite liability. A document signed under eIDAS, a timestamped financial record, a notarized contract, or a certified regulatory filing may need to remain verifiable for 20, 30, or 50 years. If the signature algorithm becomes forgeable before the document’s useful life expires, the legal and regulatory implications are profound. There is no encryption analogue to this problem. Encryption protects data at a moment in time; signatures establish trust for an indefinite duration.

Standards fragmentation compounds the engineering burden. ML-DSA (FIPS 204) is the primary NIST-standardized signature algorithm. SLH-DSA (FIPS 205) is the conservative hash-based alternative. FN-DSA (expected FIPS 206, formerly FALCON) is still in standardization and offers compact signatures but requires floating-point arithmetic that makes secure implementation difficult. LMS and XMSS (NIST SP 800-208) are already approved by CNSA 2.0 for firmware signing but are stateful, requiring careful key state management. Different use cases may require different algorithms. CISOs managing across jurisdictions will face additional fragmentation: the Netherlands requires specific parameter sets, China is developing its own PQC algorithms through the ICCS process, South Korea has its own KpqC competition, and some European regulators require hybrid configurations. Crypto-agility, the subject of the introduction I published, is not an aspirational future state; it is a survival requirement for organizations operating in multiple regulatory environments.
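LMS and XMSS inherit their statefulness from the one-time signatures they are built on: each Merkle-tree leaf key may sign exactly once, and reusing a leaf leaks key material. A minimal Lamport one-time signature in Python (a sketch of the underlying primitive, not LMS itself) makes the hazard concrete.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of secret values, one pair per bit of the message hash.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Signing reveals one preimage per bit. This is why the key is ONE-time:
    # a second signature over a different message reveals further preimages,
    # letting an attacker mix-and-match forgeries. LMS/XMSS must therefore
    # track which leaf keys are spent; losing that state is a security failure.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
firmware = b"firmware-image-v1"
sig = sign(sk, firmware)
assert verify(pk, firmware, sig)
assert not verify(pk, b"tampered-image", sig)
```

The function names and parameters here are illustrative; production LMS/XMSS follows NIST SP 800-208 exactly, with hardened state handling in an HSM or signing service.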

The 120,000-task PQC migration reference from an enterprise migration I led illustrates the full scale. The signature components of that migration consistently required the longest lead times, the most vendor coordination, and the most careful sequencing. The telecom PQC challenges article provides a sector-specific example of the same pattern.

What to Do Now: A Signature-First Action Plan

The analysis above can feel overwhelming, and that is precisely the reaction that leads organizations to defer the hard work in favor of the easy win of hybrid TLS. I want to offer a more productive response. These are the six highest-impact actions that CISOs and security architects can take now, prioritized by deployability and consequence.

First: inventory your signature dependencies before your encryption dependencies. Most cryptographic inventory efforts start with TLS endpoints and data-at-rest encryption because those are the easiest to discover with automated tools. Expand the scope immediately to include: code-signing certificates and keys, firmware signing infrastructure, CA dependencies (both internal and external), identity provider signing keys (SAML, OAuth, JWT), container image signing (cosign/Sigstore), SBOM attestation signing, and embedded device firmware signing keys. The PQC Readiness Self-Assessment Scorecard includes signature infrastructure in its scope, and the quantum readiness cryptographic inventory guide provides the methodology. Meta’s PQC migration framework, discussed above, offers additional practical guidance on building a cryptographic inventory from a hyperscaler perspective.
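As a hedged starting point for the discovery step, the sketch below sweeps a filesystem tree for common signing-material artifacts by filename pattern. The patterns and labels are my own illustrative choices, not an exhaustive taxonomy, and filename matching is only a first pass; real CBOM tooling must also inspect file contents, runtime configurations, HSM partitions, and firmware images.

```python
# Minimal first-pass discovery: sweep a directory tree for files whose names
# suggest signing material. Patterns are illustrative, not exhaustive.
import re
from collections import Counter
from pathlib import Path

PATTERNS = {
    "certificate":  re.compile(r"\.(pem|crt|cer|der|p7b)$", re.I),
    "private key":  re.compile(r"\.(key|p12|pfx|jks)$", re.I),
    "jwt/jwks":     re.compile(r"jwks.*\.json$|\.jwk$", re.I),
    "signed image": re.compile(r"\.(sig|asc)$", re.I),
}

def sweep(root: str) -> Counter:
    counts = Counter()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(path.name):
                counts[label] += 1
    return counts

# Usage (hypothetical paths): sweep("/etc") or sweep a repo checkout, then
# feed the hits into a proper CBOM pipeline for algorithm identification,
# key ownership mapping, and migration-difficulty scoring.
```

The useful output of even a crude sweep is not the count itself but the list of owners you must then interview: who holds each key, what signs with it, and what verifies against it.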

Second: deploy LMS or XMSS for firmware and code signing now. This is the highest-confidence, lowest-risk action available. LMS and XMSS have been standardized since NIST SP 800-208 (2020). CNSA 2.0 approves them for code and firmware signing. The deployment is self-contained: you control the signing key and the verification infrastructure, so no ecosystem coordination is required. If you sign firmware or software updates, you can begin using quantum-resistant signatures this quarter; on the managed-service side, AWS KMS and AWS Private CA already expose ML-DSA signing operations for teams pursuing a lattice-based path.

Third: pilot ML-DSA certificates on internal PKI. Your internal certificate authority is not subject to CA/Browser Forum rules. You can issue ML-DSA certificates today for non-production systems, internal services, and test environments. This allows your team to discover integration issues, measure performance impacts from larger certificates, and build operational confidence before the public WebPKI is ready. DigiCert Trust Lifecycle Manager, AWS Private CA, and OpenSSL 3.5+ with the OQS provider all support this.

Fourth: demand PQC signature roadmaps from every vendor in your trust chain. Any vendor that signs firmware updates for devices in your environment, any CA you depend on for public or private certificates, any HSM vendor, and any cloud provider should have a documented PQC signature migration timeline. If they do not, that is a procurement risk. Make PQC signature capability a scored requirement in vendor evaluations and contract renewals.

Fifth: address long-lived signed documents now. Any document or artifact signed today that must remain verifiable past 2035 should either use PQC signatures now (if your tooling supports it) or be wrapped with a PQC countersignature and a PQC-signed timestamp. This is particularly critical for regulated industries where signed records have multi-decade retention requirements.

Sixth: stop deploying embedded systems with non-upgradeable signature verification. Any device being manufactured today with a hardware-fused RSA or ECDSA verification key and no mechanism for algorithm update is a device with a bounded operational life. Quantify the exposure, build the replacement timeline into your capital planning, and update your procurement requirements to mandate PQC-capable signature verification paths for all new devices.

The PQC Migration Framework at PQCFramework.com provides the structured methodology for executing this plan across all eight migration phases. Practical Steps to Quantum Readiness provides the entry points. Quantum Ready covers the organizational readiness strategy, including how to structure the business case for board-level approval.

The Regulatory Clock Is Already Running

The signature migration challenge is not just an engineering problem operating on an engineering timeline. It is constrained by regulatory deadlines that are already published and, in some cases, already binding.

NIST IR 8547 proposes that quantum-vulnerable algorithms at 112-bit security strength (RSA-2048, ECC P-256) will be deprecated after 2030 and disallowed after 2035. Algorithms at 128-bit security and above (RSA-3072, ECC P-384) will be disallowed after 2035 regardless of key size. This means that every signature algorithm currently protecting the infrastructure mapped in this article has a published sunset date.

CNSA 2.0 sets more aggressive timelines for national security systems: software and firmware signing should already support PQC (as of 2025), with full adoption by 2030. New NSS acquisitions must be CNSA 2.0-compliant by January 1, 2027. If you sell into government or defense markets, these timelines are procurement requirements today.

The EU coordinated PQC roadmap, published in June 2025, directs organizations to begin transitioning by end of 2026, secure high-risk systems by 2030, and complete full migration by 2035. The FIPS 140-2 sunset on September 21, 2026 means that all remaining FIPS 140-2 certificates move to Historical status; only FIPS 140-3 validated modules are acceptable for new procurements, and those should be PQC-capable.

As I have argued in Q-Day Predictions Are Irrelevant — Deadlines Are Set, the reason to act is no longer the prediction of when a CRQC will arrive. The deadlines are already published by regulators, insurers, and standards bodies. Those deadlines do not wait for the quantum computing community to agree on a timeline. They are binding, and the signature-related deadlines, particularly for code signing under CNSA 2.0, are the earliest ones on the calendar. The complete US PQC regulatory framework maps the full picture.

The Trust Infrastructure Is Already Built

The opening scenario of this article described a forged signature cascade. Every signature in the chain was forged, and every verification check passed. The scenario is hypothetical today because no CRQC yet exists. But the infrastructure the scenario targets is not hypothetical. It is in production, right now, in every enterprise, every government, and every critical infrastructure system on earth.

From the TPM attestation key in the silicon through the firmware signature, the OS kernel module signature, the application code signature, the container image signature, the TLS certificate chain, the identity provider token signature, and the document timestamp, modern systems depend on interlocking chains of digital signatures that run deeper than most organizations have ever mapped.

Each link uses algorithms that Shor’s algorithm will break. Each link has its own migration constraints, its own ecosystem dependencies, and its own timeline. The links closest to silicon are the hardest to replace and the longest-lived. The links in the application and cloud layer are more tractable but are being built right now with quantum-vulnerable cryptography, adding to the migration backlog daily.

The deadlines are already set. NIST IR 8547 targets deprecation of quantum-vulnerable algorithms by 2030 and disallowance by 2035. CNSA 2.0 requires PQC for firmware signing now and for all national security system categories by 2033-2035. The EU coordinated PQC roadmap directs organizations to begin transitioning by end of 2026. These deadlines are not waiting for Q-Day predictions to converge. They are not waiting for the community to agree on when a CRQC will arrive. They are here.

The signature supply chain is the trust infrastructure of the digital world. Mapping it, understanding it, and migrating it is the most consequential engineering challenge in the PQC transition. It is also the one that most migration programs have barely begun.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.