Crypto-Agility Is an Architecture Problem, Not a Library Swap
The Most Repeated Advice in PQC Is the Least Understood
The phrase “crypto-agility” appears in every PQC migration guide published since NIST began its standardization process. It appears in vendor pitch decks, in government roadmaps, in consulting frameworks, in analyst reports, and in board-level risk summaries. NSA’s CNSA 2.0 advisory mentions it. NIST’s IR 8547 assumes it. The EU’s coordinated PQC roadmap requires it. At RSA 2026, it was inescapable.
It sounds like a solution. In practice, it is a description of a problem that most organizations have not begun to solve.
After leading PQC migrations involving 120,000+ tasks at Fortune Global 500 organizations, developing the open-source PQC Migration Framework, and publishing detailed Cryptographic Bills of Materials that map the real cryptographic complexity inside production systems, I can tell you exactly where crypto-agility breaks down. It is almost never in the cryptographic library.
The library is the easy part. OpenSSL 3.5 ships native ML-KEM and ML-DSA support, and earlier releases can add the algorithms through the OQS provider. BoringSSL, wolfSSL, and AWS-LC are all adding PQC algorithm support. The cryptographic primitives are available, performant, and standardized. If crypto-agility were a library problem, we would already be done.
The reason we are not done is that cryptographic choices in enterprise environments are not made in one place. They are embedded in hundreds of independent codebases, hard-coded into protocol specifications, fused into hardware that cannot be updated, baked into vendor dependencies you do not control, and scattered across configuration files that no one has inventoried. Changing the library is like changing the engine in a car that is welded to the chassis, bolted to a road, and connected to every other car on the highway by rigid steel bars. The engine swap is easy. Everything around it is the actual problem.
This article maps that problem in detail. It is written for CISOs, enterprise architects, and CTO-level leaders who have heard “be crypto-agile” repeated endlessly and want to know what it actually takes. I will be honest about how hard it is, because false optimism leads to bad planning. But the goal is not defeatism. It is to help you invest your architecture effort in the places where it will actually produce agility, rather than in the places where it feels productive but changes nothing.
At RSA 2026, the conversation around PQC and crypto-agility reached a new intensity. Practitioners from Deloitte, Accenture, Citi, and IBM shared field reports from active migration programs. The consistent theme: the algorithm migration itself is well-understood. The architectural challenge of making systems agile enough to accept the new algorithms is where programs stall. The gap between “we have a PQC algorithm” and “our systems can use it” is where years of work and millions of dollars live. This article maps that gap.
The Library Fallacy
When most people say “crypto-agility,” they mean something like this: your application uses AES-256 for encryption and RSA-2048 for key exchange. To become crypto-agile, you swap the RSA call for an ML-KEM call, update a configuration file, redeploy, and you are quantum-safe. Maybe you add an abstraction layer so the next swap is even easier.
This mental model is not wrong in the sense that it describes a real operation that must happen. It is wrong in the sense that it describes perhaps 5% of the actual work of a PQC migration, and it describes the easiest 5%.
The cryptographic library is the component with the best PQC support, the most active development, the most thorough testing, and the most straightforward upgrade path. If your entire migration consisted of updating library calls, PQC migration would be a routine maintenance exercise. A developer could complete it in a sprint.
I have seen this fallacy play out in executive briefings more times than I can count. A CISO hears “the new algorithms are standardized, the libraries support them, deploy hybrid TLS” and concludes that PQC migration is fundamentally a software update. The board hears “six-month project.” The team discovers, three months in, that the software update was the easy 5%, and the remaining 95% involves hardware procurement cycles, vendor negotiations, protocol standard revisions, embedded device replacement programs, and a multi-year certificate infrastructure overhaul. The six-month project becomes a three-year program with five times the original budget.
The other 95% of the work is everything the library touches. It is the protocols that negotiate which algorithms to use. It is the hardware security modules that enforce which algorithms are available. It is the certificate authorities that determine which algorithms can appear in certificates. It is the embedded devices that verify signatures using algorithms burned into silicon. It is the identity providers, payment processors, cloud platforms, and SaaS vendors whose cryptographic choices you depend on but do not control. It is the thousands of configuration files across hundreds of applications where someone, at some point, typed “RSA-2048” or “AES-256-GCM” or “ES256” and moved on.
An abstraction layer helps if your entire stack uses it. In most enterprises, the stack was not built with a single abstraction layer. It was built over 20 years by dozens of teams using different languages, different frameworks, different libraries, and different ideas about where cryptographic choices should be made. The SHA-1 to SHA-2 migration is the closest historical precedent, and it took the industry over five years despite SHA-2 being a drop-in replacement at the library level. The PQC migration is orders of magnitude more complex: new key sizes, new certificate formats, new protocol handshake behaviors, new HSM capabilities, and entirely new algorithm families rather than parameter updates within a single family.
The abstraction layer that would have made crypto-agility easy does not exist in most organizations. Building it now is itself a multi-year architecture program. And that architecture program is, I would argue, more valuable than the algorithm migration itself, because it solves not only the PQC migration but every future cryptographic transition.
This is what I call the library fallacy: the belief that because the library problem is solvable, the crypto-agility problem is solvable at similar cost and speed. It is not. The library problem is the trivial case of a much harder architectural challenge.
Where Crypto-Agility Actually Breaks Down
The six categories below are not theoretical concerns. They are the actual blockers I have encountered in real enterprise PQC migrations. Each one represents a class of architectural dependency where the conventional advice to “be crypto-agile” collides with physical, contractual, or ecosystem constraints that cannot be resolved by updating a library.
HSMs with Non-Upgradeable Cryptographic Implementations
Hardware Security Modules are the cryptographic workhorses of financial services, government, and any organization that takes key management seriously. They protect the private keys for code signing, certificate issuance, payment processing, database encryption, and dozens of other critical operations. They are also, in many deployed configurations, the single hardest component to make crypto-agile.
Many HSMs in production today support a fixed set of algorithms implemented in dedicated hardware. The algorithm set was determined when the HSM was designed and manufactured. Firmware updates can add new algorithms in some models, but others have their cryptographic implementations in hardware logic that cannot be changed. Even when firmware updates are possible, applying them in a FIPS-validated environment is not a simple operation. FIPS 140-3 validation is algorithm-specific: an HSM validated for RSA-2048 and AES-256 is not automatically validated for ML-KEM or ML-DSA after a firmware update. The vendor must submit the updated firmware for a new validation, a process that can take 12-18 months. Organizations that require FIPS-validated cryptography (and most financial services and government organizations do) face a chicken-and-egg problem: they cannot use PQC algorithms until their HSMs are validated for them, but the validation process depends on vendor timelines they do not control.
HSM replacement cycles compound the challenge. HSMs are expensive, and organizations amortize them over 5-10 year lifecycles. Replacing an HSM is not a swap-and-restart operation; it involves key ceremony (the formal, audited, multi-person process of generating and provisioning new master keys), re-keying of every key hierarchy that depends on the HSM, and operational disruption during the transition. For organizations with hundreds of HSMs across multiple data centers, this is a multi-year capital and operational project.
The CBOM I published tracing the cryptographic iceberg inside a mobile banking transaction illustrates this dependency concretely. The HSM sits at the center of the payment processing chain, protecting the keys that generate EMV cryptograms, validate transaction signatures, and manage the key hierarchy for card network interactions. The HSM’s algorithm support constrains the entire chain. Upstream and downstream systems can be as crypto-agile as they want; if the HSM in the middle cannot process ML-DSA signatures, the chain cannot migrate.
Protocols with Hard-Coded Algorithm Identifiers
TLS 1.3 is the gold standard for protocol-level crypto-agility. The cipher suite negotiation mechanism allows client and server to agree on the strongest mutually supported algorithm set, and adding new algorithms is a matter of registering new cipher suite identifiers with IANA. If every protocol worked like TLS 1.3, crypto-agility at the protocol layer would be solved.
Most protocols do not work like TLS 1.3.
EMV payment protocols, which govern the communication between payment terminals, card chips, and issuing banks, use fixed algorithm identifiers in the terminal-CA certificate chain. The algorithm is not negotiated; it is specified by the card scheme (Visa, Mastercard, JCB) and implemented in every terminal and card chip. Changing the algorithm requires updating the card scheme specification, issuing new CA certificates, updating every payment terminal, and replacing every card in circulation. This is a global ecosystem coordination problem that involves billions of physical devices.
Legacy VPN configurations running IKEv1 (still in production at many organizations despite IKEv2 having been available for two decades) use fixed transform sets that specify exact algorithm combinations. IKEv2 improved this with more flexible negotiation, but many enterprises have not migrated from IKEv1, and the VPN appliances running it may not support IKEv2 without hardware replacement. The installed base of IKEv1-only VPN concentrators in government and financial services is substantial, and their replacement cycle is governed by capital budgets and contract terms, not by cryptographic urgency.
Industrial protocols present some of the most rigid algorithm constraints. OPC UA, the dominant protocol for industrial automation interoperability, defines security profiles that specify exact algorithm combinations. Moving to a new algorithm requires a new security profile definition from the OPC Foundation, implementation by every OPC UA vendor, and adoption by every industrial operator. DNP3 Secure Authentication, used widely in electric utility SCADA systems, uses HMAC-SHA-256 as a fixed authentication mechanism. While HMAC-SHA-256 is not directly vulnerable to quantum attack (Grover’s algorithm provides only a quadratic speedup against symmetric primitives), the broader DNP3 security architecture’s reliance on RSA or ECC for key exchange is quantum-vulnerable, and updating it requires coordination across the North American utility industry. Modbus/TCP security extensions similarly specify fixed algorithm profiles. When the algorithm identifier is a fixed field in a binary protocol specification, “agility” requires rewriting the protocol standard, updating every implementation, coordinating rollout across an entire ecosystem of vendors and operators, and maintaining backward compatibility with devices that cannot be updated. This is not a configuration change. It is a multi-year standards body effort.
The lesson for enterprise architects: any protocol in your environment that does not support algorithm negotiation should be treated as technical debt. It is a point where crypto-agility is structurally impossible until the protocol itself is revised.
Certificate Authorities and Trust Chain Dependencies
Your organization’s crypto-agility is constrained by the crypto-agility of every certificate authority you depend on. If your TLS certificates come from a public CA that does not yet issue ML-DSA certificates, your ability to migrate your TLS signature infrastructure is blocked regardless of how agile your own systems are.
The CA/Browser Forum Baseline Requirements currently do not permit ML-DSA in publicly trusted TLS certificates. DigiCert has drafted a ballot to change this, but the ballot process takes time, and the revised Baseline Requirements must then be adopted by the browser root programs (Google, Apple, Mozilla, Microsoft) before CAs can act. Until this ecosystem alignment is complete, no publicly trusted CA can issue ML-DSA certificates for production use. Organizations that depend on public CAs have no control over this timeline.
Hybrid certificates (carrying both a classical and a PQC signature) are the intended bridge, but the IETF LAMPS composite certificate formats remain in draft (version 16 as of April 2026). Early adopters who deploy hybrid certificates risk format changes that require re-issuance. The composite certificate approach also increases certificate size substantially, compounding the bandwidth and storage challenges that already concern operators of high-traffic TLS infrastructure. Some organizations are exploring dual certificate deployments (separate classical and PQC certificates presented to different clients based on capability) as an alternative, but this approach doubles the certificate management burden and introduces its own operational complexity.
For internal PKI, the picture is better. If you operate your own CA, you control the algorithm selection and can begin issuing ML-DSA certificates today using platforms like AWS Private CA, DigiCert Trust Lifecycle Manager, or OpenSSL 3.5+. This is one of the highest-value crypto-agility investments an organization can make: moving internal PKI to a platform that supports multiple algorithm families and can issue certificates with PQC algorithms on demand, driven by policy rather than code changes.
The deeper architectural lesson is about trust chain rigidity. Re-rooting a trust chain (changing the algorithm of a root CA) requires every relying party to update its trust store. For a public CA, that means billions of devices. For an enterprise CA, it means every server, every client, every device, and every application that validates certificates against that root. This is an inherently non-agile operation, and no amount of library-level abstraction changes it.
Embedded Systems with Multi-Decade Lifecycles
An industrial controller installed in a power substation in 2015 may operate until 2035 or 2040. A medical imaging system deployed in a hospital in 2018 may run until 2033. An automotive ECU manufactured in 2024 will be in vehicles for 15-20 years. A satellite launched in 2023 will operate for its designed mission life with no possibility of physical hardware changes.
These devices were not designed to be crypto-agile because, at the time they were designed, crypto-agility was not a procurement requirement. Many have signature verification keys stored in one-time-programmable memory or hardware security modules that cannot be updated. The firmware verification process checks signatures against a fixed public key. Changing that key, or changing the algorithm the key uses, is physically impossible without replacing the device.
The scale of the non-agile installed base is enormous. The global installed base of industrial PLCs and RTUs numbers in the tens of millions. The automotive industry produces roughly 80 million vehicles per year, each containing dozens to hundreds of ECUs. The medical device installed base includes millions of units with multi-decade operational lives. The aerospace and defense sector deploys avionics and satellite systems designed for 20-30 year operational life with no possibility of physical access for hardware modification.
“New designs should be crypto-agile” is correct advice for devices being designed today. But it does nothing for the installed base, and the installed base will be in operation for decades. The gap between the crypto-agile future and the non-agile present will persist for 15-25 years in industrial environments, and longer in aerospace and defense. I detailed the specific challenges for the telecom sector in Telecom PQC Challenges, and the CBOM for Open RAN shows the density of cryptographic dependencies in modern telecom infrastructure, including radio units with decade-plus lifecycles and constrained compute environments that may not support PQC algorithm verification.
For any organization that operates embedded devices with long lifecycles, the crypto-agility conversation must include an honest assessment of the non-agile tail. How many devices in your fleet cannot be updated? What is the expected remaining operational life of each? What is the cost and operational disruption of replacing them? What interim risk mitigations (network segmentation, monitoring, restricted access) can contain the risk while the non-agile devices remain in service? These are not comfortable questions, but they are the real questions that crypto-agility planning must address.
Third-Party Dependencies and Vendor Ecosystems
Your organization’s crypto-agility is bounded by your least agile vendor. This is not an abstract principle; it is an operational reality that dominates the timeline of every enterprise PQC migration I have led.
Consider the dependency chain for a typical enterprise application. The application runs on a cloud provider’s infrastructure (AWS, Azure, GCP), uses a managed database (with encryption provided by the cloud provider’s KMS), authenticates users through a SaaS identity provider (Okta, Azure AD, Ping), processes payments through a payment gateway (Stripe, Adyen, Worldpay), stores documents in a SaaS content management system, and sends email through a managed email platform. Each of these providers makes its own cryptographic choices, on its own timeline, according to its own assessment of quantum risk.
If your identity provider does not support PQC signing for SAML or OAuth tokens, your authentication infrastructure cannot migrate regardless of what you do internally. If your payment processor does not support PQC in its terminal-to-gateway protocol, your payment chain cannot migrate. If your cloud KMS does not support ML-KEM for key wrapping, your data-at-rest encryption cannot migrate.
Meta’s April 2026 PQC migration framework paper acknowledged this challenge directly, noting that building a comprehensive cryptographic inventory is “inherently challenging” in large infrastructures. Their five-level maturity model (from PQ-unaware to PQ-enabled) places cryptographic inventory as the foundation, and Meta’s engineers emphasized that even at their scale of technical sophistication, the inventory phase was a significant undertaking. For organizations without Meta’s engineering resources, the challenge is proportionally greater.
Industry surveys consistently show that a majority of organizations expect their vendors to handle the PQC migration for them. This expectation is not well-founded. Most vendors are working on PQC support, but their timelines are driven by their own product roadmaps, their own FIPS certification schedules, and their own assessments of customer demand, not by your migration timeline. The Thales 2026 Data Threat Report found that a large proportion of organizations identify harvest-now-decrypt-later as their primary quantum-related risk, yet far fewer have begun concrete migration steps. The gap between awareness and action is widest at the vendor dependency layer, where organizations assume their suppliers are handling the problem.
The architectural implication: reduce the number of places where you depend on external cryptographic choices. Centralize TLS termination so you control the algorithm negotiation. Use your own KMS rather than relying on a SaaS provider’s key management. Operate your own internal CA rather than depending solely on public CAs. Each point of centralization is a point where you control the agility, rather than inheriting someone else’s rigidity.
The Configuration vs. Code Problem
True crypto-agility means algorithm selection is driven by configuration or policy, not by code. In an ideally crypto-agile system, an administrator changes a policy setting (from “RSA-2048” to “ML-KEM-768”), and every cryptographic operation across the system follows the new policy. No code changes, no recompilation, no redeployment.
In most enterprise applications, this is not how algorithm selection works. Algorithm choices are hard-coded in dozens of forms: specific cipher suite strings in application configuration files, specific key types in database connection strings, specific signing algorithms in CI/CD pipeline definitions, specific JWT algorithm identifiers in authentication middleware, specific certificate types in load balancer configurations. Each hard-coded choice is a point where agility fails.
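To make the contrast concrete, here is a minimal sketch of policy-driven selection in Python. The policy structure and operation names are illustrative assumptions, not a reference to any particular product:

```python
# Minimal sketch: policy-driven algorithm selection. All names are illustrative.

# In production this would live in a managed policy service or config store;
# a plain dict stands in for it here.
CRYPTO_POLICY = {
    "jwt_signing": "ML-DSA-65",           # was "RS256" before the policy change
    "tls_key_exchange": "X25519MLKEM768",
    "data_at_rest": "AES-256-GCM",
}

def select_algorithm(operation: str) -> str:
    """Resolve the algorithm for a named operation from policy, not from code."""
    if operation not in CRYPTO_POLICY:
        raise ValueError(f"no crypto policy defined for: {operation}")
    return CRYPTO_POLICY[operation]

# Application code names the operation, never the algorithm. Rotating
# RS256 to ML-DSA-65 is a one-line policy edit and zero code changes.
print(select_algorithm("jwt_signing"))   # -> ML-DSA-65
```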
The 120,000-task migration I led at enterprise scale illustrates the magnitude of this problem. A significant portion of those tasks were finding and updating hard-coded algorithm selections scattered across thousands of codebases, configuration files, infrastructure-as-code templates, CI/CD pipeline definitions, and vendor integration specifications. Each one required identifying the current algorithm choice, determining the correct replacement, testing the change in a non-production environment, and coordinating the deployment. The discovery phase alone, just finding all the places where algorithm choices were hard-coded, consumed months. At scale, this is not a technology project. It is a logistics operation with a discovery problem at its core.
The infrastructure-as-code dimension deserves special attention. Modern enterprises define their infrastructure through Terraform modules, CloudFormation templates, Kubernetes manifests, and Ansible playbooks. These files routinely contain hard-coded cipher suite strings, certificate types, key sizes, and algorithm identifiers. A Terraform module that provisions a load balancer with a specific TLS policy, or a Kubernetes manifest that specifies a certificate type for a service mesh, represents a hard-coded algorithm choice that must be discovered, evaluated, and updated. Because these files are version-controlled and deployed through CI/CD pipelines, the change management process is well-defined. But the discovery process (finding every template that contains an algorithm-specific configuration) is not, and the sheer volume in a large enterprise can be overwhelming.
The legacy codebase dimension is particularly painful. An application written in 2012 that uses a now-deprecated cryptographic API to make RSA calls does not become crypto-agile by updating the cryptographic library. The library may support ML-KEM, but the application code calls RSA-specific functions with RSA-specific parameters. Making that application crypto-agile requires refactoring it to use a generic cryptographic API, which in a mature codebase with years of accumulated dependencies can be a substantial engineering effort. Multiply that by the hundreds of applications in a typical enterprise portfolio, and the scale of the configuration-vs-code problem becomes clear.
What Real Crypto-Agility Looks Like
The previous section established what does not work. This section describes the architectural properties that actually enable crypto-agility, based on what I have seen succeed in real migrations.
Algorithm abstraction at the platform level, not the application level. The single most effective architectural decision for crypto-agility is centralizing algorithm selection. Applications should call generic operations: sign, verify, encrypt, decrypt, key-exchange. The platform underneath (a key management service, a PKI platform, a TLS termination point, a signing service) selects the algorithm based on policy. If 500 applications each make their own algorithm choices, you have 500 migration projects. If a central platform makes the choice, you have one. The applications consume a service; the service implements the cryptography. Algorithm rotation becomes a platform operation, not an application-by-application slog.
This is not a new idea. AWS KMS, Azure Key Vault, and Google Cloud KMS all implement this pattern for key management. The challenge is that many organizations use these services for some operations but bypass them for others. Every direct cryptographic library call that bypasses the centralized service is a point of agility failure. The architectural discipline required is: no application makes its own algorithm choices. Every cryptographic operation goes through the platform.
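A minimal sketch of the facade pattern, assuming placeholder backends (a real implementation would delegate each call to an HSM, KMS, or signing service):

```python
"""Sketch of a central signing facade. Backends are toy placeholders that
label their output; a real one would call into an HSM, KMS, or signing service."""
import hashlib

class RsaBackend:
    def sign(self, key_id: str, data: bytes) -> bytes:
        # Placeholder: a real backend performs RSA signing inside the HSM.
        return b"RSA:" + hashlib.sha256(key_id.encode() + data).digest()

class MlDsaBackend:
    def sign(self, key_id: str, data: bytes) -> bytes:
        # Placeholder: a real backend performs ML-DSA signing inside the HSM.
        return b"ML-DSA:" + hashlib.sha256(key_id.encode() + data).digest()

# Policy maps a *purpose* to a backend. Changing this mapping migrates every
# caller at once, with no application code changes.
POLICY = {
    "code_signing": MlDsaBackend(),       # migrated
    "legacy_partner_api": RsaBackend(),   # not yet migrated
}

def sign(purpose: str, key_id: str, data: bytes) -> bytes:
    """The only signing entry point applications are allowed to call."""
    return POLICY[purpose].sign(key_id, data)

print(sign("code_signing", "release-key-1", b"firmware-image"))
```

The point is the shape: callers name a purpose, policy names the algorithm, and the mapping between the two is the single thing that changes during a migration.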
Centralized key and certificate management. If keys are managed centrally, algorithm rotation is a platform operation. If keys are scattered across application configurations, environment variables, CI/CD secrets, and vendor integrations, algorithm rotation requires touching every one of them. The architectural investment in centralized key and certificate lifecycle management (through tools like HashiCorp Vault, cloud KMS services, or enterprise PKI platforms) pays dividends far beyond PQC migration. It is the infrastructure that makes every future cryptographic change operationally tractable.
The certificate lifecycle dimension is equally important. Organizations that still manage certificates through spreadsheets, manual renewal processes, and ad hoc ticketing systems will find PQC certificate migration unmanageable at scale, particularly as certificate lifetimes shrink (the CA/Browser Forum is reducing maximum TLS certificate validity to 200 days in 2026 and ultimately 47 days by 2029). Automated certificate lifecycle management, through ACME or similar protocols, integrated with a crypto-agile CA platform, is a prerequisite for sustainable PQC certificate operations.
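The arithmetic behind that claim is worth making explicit. A back-of-envelope sketch, with an illustrative fleet size and renewal margins:

```python
"""Back-of-envelope arithmetic: why manual certificate processes collapse as
lifetimes shrink. Fleet size and renewal margins are illustrative."""
fleet = 10_000  # TLS certificates under management

for lifetime_days, renew_margin in [(398, 30), (200, 30), (47, 10)]:
    effective = lifetime_days - renew_margin       # renew this long before expiry
    renewals_per_year = round(fleet * 365 / effective)
    print(f"{lifetime_days:3}-day certs: ~{renewals_per_year:,} renewals per year")

# 398-day certs: ~9,918 renewals per year
# 200-day certs: ~21,471 renewals per year
#  47-day certs: ~98,649 renewals per year
```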
Protocol negotiation as a design requirement. TLS 1.3’s cipher suite negotiation should be the model for every protocol in your environment. Any protocol that does not support algorithm negotiation should be flagged as technical debt. For internal protocols (microservice-to-microservice communication, internal API authentication, inter-system messaging), this is within your control. Design new internal protocols with algorithm negotiation from the start; retrofit existing ones during their normal modernization cycle. For external protocols (payment networks, industrial standards, inter-organizational data exchange), this is a standards body and vendor dependency, but it should inform your procurement and standards engagement strategy. Actively participate in the standards bodies that govern the protocols you depend on, because waiting for someone else to add algorithm negotiation means waiting on someone else’s timeline.
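The negotiation shape itself is simple, which is precisely why its absence in a protocol is a design failure rather than a technical constraint. A toy sketch for an internal protocol, with illustrative algorithm names:

```python
"""Toy sketch of TLS-style algorithm negotiation for an internal protocol.
Algorithm names are illustrative; the point is the negotiation shape."""

SERVER_SUPPORTED = ["X25519MLKEM768", "ML-KEM-768", "X25519"]  # server preference order

def negotiate(client_offer: list[str]) -> str:
    """Pick the first server-preferred algorithm the client also offered.
    New algorithms are added by extending a list, not by revising the protocol."""
    for alg in SERVER_SUPPORTED:
        if alg in client_offer:
            return alg
    raise ValueError("no mutually supported algorithm; refuse the connection")

# A legacy client and a PQC-capable client can talk to the same server:
print(negotiate(["X25519"]))                         # legacy peer -> X25519
print(negotiate(["ML-KEM-768", "X25519MLKEM768"]))   # PQC peer -> X25519MLKEM768
```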
Hardware with firmware-upgradeable cryptographic implementations. This is a procurement requirement, not a design preference. Every HSM purchased from this point forward should contractually support PQC algorithm updates via firmware, with the vendor committing to FIPS 140-3 validation for those updates. Every secure element, every TPM, every hardware token should support algorithm update paths. The FIPS 140-2 sunset on September 21, 2026 creates a natural procurement boundary: new HSM acquisitions must be FIPS 140-3, and that validation should include PQC algorithms or a committed roadmap for adding them.
Separation of data format from cryptographic binding. When a signed document format hard-codes “RSA-SHA256” in its signature metadata, the format itself resists agility. CMS/PKCS#7, JSON Web Signature (JWS), and XML Signature all carry algorithm identifiers as part of the signed data, which means the format supports algorithm variation. But many application-specific formats do not. Any data format in your environment where the signature algorithm is implicit rather than explicitly identified should be treated as a migration obstacle.
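The pattern to copy is the JWS one: the algorithm identifier travels inside the signed bytes. A minimal sketch, with HMAC standing in for a real signature so the example runs without external libraries:

```python
"""Sketch of a signed envelope that names its algorithm explicitly and binds
the identifier into the signed bytes (the JWS pattern). HMAC stands in for a
real signature so the example runs without external libraries."""
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_envelope(payload: dict, key: bytes, alg: str = "HS256") -> str:
    header = {"alg": alg}  # explicit identifier, never implicit
    signing_input = (b64url(json.dumps(header).encode())
                     + "." + b64url(json.dumps(payload).encode()))
    # The header, and therefore the algorithm identifier, is under the
    # signature: verifiers know what to verify with, and silent algorithm
    # substitution is detectable.
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

print(sign_envelope({"doc": "invoice-42"}, key=b"demo-key"))
```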
Cryptographic inventory as a continuous practice. The cryptographic inventory is never “done.” New applications, new vendor integrations, new cloud services, and new infrastructure deployments introduce new cryptographic choices continuously. Automated discovery must run continuously, feeding a living CBOM that informs architecture decisions rather than gathering dust in a compliance folder. As I argued in Rethinking CBOM, the CBOM should be an operational tool that feeds your agility posture, not a document produced once for an auditor and then forgotten.
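What a living-CBOM record might look like, loosely modeled on CycloneDX 1.6’s cryptographic asset support; the operational fields beyond the CycloneDX-style core are illustrative additions of my own:

```python
"""Sketch of a single living-CBOM record as a discovery scanner might emit
it, loosely following the shape of CycloneDX 1.6 cryptographic assets. The
operational fields (selectionPoint, migrationPath, lastSeen) are illustrative."""
import json
from datetime import date

cbom_entry = {
    "type": "cryptographic-asset",
    "name": "payments-gateway TLS server certificate",
    "cryptoProperties": {
        "assetType": "certificate",
        "algorithm": "RSA-2048",
    },
    # Fields that turn the CBOM into a planning tool rather than a filing:
    "selectionPoint": "terraform/modules/alb/main.tf",  # where the choice is made
    "migrationPath": "configuration",  # code | configuration | protocol | hardware | vendor
    "lastSeen": date.today().isoformat(),
}
print(json.dumps(cbom_entry, indent=2))
```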
The introduction to crypto-agility I published covers the concept at a foundational level. Everything in this section builds on those foundations with the operational specificity that comes from having executed this at scale.
The Standards Fragmentation Force Multiplier
Crypto-agility matters beyond the quantum threat. The coming fragmentation of post-quantum cryptographic standards across jurisdictions transforms crypto-agility from a forward-looking best practice into an immediate operational requirement for any organization that operates across borders.
NIST has standardized ML-KEM (FIPS 203), ML-DSA (FIPS 204), and SLH-DSA (FIPS 205), with FN-DSA (FIPS 206) expected. These are the standards that the United States and its closest allies will adopt. But they are not the only standards that will matter.
China has launched an independent process through the ICCS (International Cryptographic Competition and Standardization) to develop and standardize its own post-quantum algorithms. These will not be the same algorithms as NIST’s selections, and the motivations extend beyond technical considerations into sovereignty and trust. Any organization with operations in China, or that processes data subject to Chinese cybersecurity law, will need to support China’s PQC algorithms alongside NIST’s. This is not a hypothetical future requirement; China’s cybersecurity regulations already mandate domestic cryptographic standards for certain categories of data and infrastructure.
Russia continues to develop GOST-family cryptographic standards that diverge from Western selections, including the Shipovnik (Rosehip) algorithm family. South Korea’s KpqC competition has produced algorithms with no counterpart in the NIST selection. The Netherlands and other European countries have recommended higher security parameters than NIST defaults, sometimes combined with algorithms NIST did not select. The Netherlands’ NCSC recommends ML-KEM-1024 combined with FrodoKEM and classical ECC in a layered configuration, reflecting a hedging strategy that no single algorithm family is trusted absolutely. France’s ANSSI and Germany’s BSI have published similar guidance recommending hybrid configurations.
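That layered construction is worth sketching, because it shows why hedging and agility are the same architectural property. A toy combiner, with byte strings standing in for real ECDH and KEM outputs:

```python
"""Toy sketch of the layered key-establishment idea behind the Dutch
guidance: derive the session key from every component, so the result holds
as long as any one algorithm survives. Byte strings stand in for real
ECDH/KEM shared secrets; real protocols use a KDF bound to the transcript."""
import hashlib

def combine_shared_secrets(*secrets: bytes) -> bytes:
    h = hashlib.sha3_256()
    for s in secrets:
        # Length-prefix each input so concatenations cannot be ambiguous.
        h.update(len(s).to_bytes(4, "big") + s)
    return h.digest()

session_key = combine_shared_secrets(
    b"classical-ecdh-secret",   # classical ECC layer
    b"ml-kem-1024-secret",      # NIST lattice layer
    b"frodokem-secret",         # conservative unstructured-lattice layer
)
print(session_key.hex())
```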
For a multinational enterprise, this fragmentation means that a single-algorithm migration plan is insufficient. Consider a financial institution with operations in the US, EU, China, and South Korea. Its American operations will follow NIST standards, governed by CNSA 2.0 and the US PQC regulatory framework. Its European operations may need hybrid configurations following BSI/ANSSI guidance. Its Chinese operations will need ICCS algorithms. Its South Korean operations may need KpqC algorithms. Each algorithm family requires its own implementation, its own testing, its own key management, and its own operational procedures.
A CISO managing this environment without crypto-agility would face four separate migration programs, each requiring deep integration effort. A CISO whose architecture is genuinely crypto-agile would face one architecture that supports four algorithm families through policy-driven configuration. The difference between those two futures is the investment made today.
An organization that can only run one algorithm family is an organization that will face a second migration when a jurisdiction mandates a different one. As I detailed in Quantum Sovereignty, the assumption that the world will converge on a single set of post-quantum standards is not supported by the evidence. Crypto-agility is the hedge against this fragmentation, and it is the only way to avoid a perpetual cycle of migration programs, each one as painful as the first.
This is also why I argue in Sovereignty in the PQC Era that crypto-agility should be a national security requirement, not an optional architectural feature. A nation’s critical infrastructure that can only run one algorithm family is a nation that can be disrupted by the failure or compromise of that algorithm. The diversification of cryptographic capability is a sovereignty issue.
What to Do Now: Architecture Decisions That Enable Agility
The gap between “be crypto-agile” and actually being crypto-agile is an architecture gap. Closing it requires specific, concrete decisions. Here are the six that will produce the most impact.
First: audit every point where an algorithm is selected. This is more specific than “do a cryptographic inventory.” A cryptographic inventory tells you what algorithms are in use. An algorithm selection audit tells you where the choice is made: in code, in a configuration file, in a protocol specification, in hardware, or in a vendor dependency. Each category has a radically different migration path. Code choices can be refactored. Configuration choices can be centralized. Protocol choices require standards body engagement. Hardware choices require procurement changes. Vendor choices require contract negotiation. The audit should produce a categorized map of every algorithm selection point in your environment, grouped by migration difficulty. The quantum readiness cryptographic inventory guide provides the methodology for this first step.
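A minimal sketch of what one pass of that audit can look like. Real discovery tooling also inspects binaries, runtime behavior, and network traffic; the patterns and file-type heuristics here are illustrative only:

```python
"""Minimal sketch of an algorithm-selection audit pass over a repo tree.
Real discovery tools do far more; this shows the categorization idea."""
import re
from pathlib import Path

# Algorithm choices that someone typed into a file.
HARDCODED = re.compile(r"RSA-?2048|ES256|RS256|secp256r1|AES-?128", re.IGNORECASE)

def categorize(path: Path) -> str:
    """Where the file lives hints at where the choice is made, which in turn
    determines the migration path (refactor, reconfigure, renegotiate, replace)."""
    if path.suffix in {".tf", ".yaml", ".yml", ".json", ".ini", ".conf"}:
        return "configuration"
    if path.suffix in {".py", ".java", ".go", ".ts", ".c", ".cpp"}:
        return "code"
    return "review-manually"

def audit(root: str) -> list[tuple[str, int, str, str]]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for i, line in enumerate(text.splitlines(), 1):
            if HARDCODED.search(line):
                findings.append((str(path), i, categorize(path), line.strip()))
    return findings

for file, line_no, category, line in audit("."):
    print(f"{category:15} {file}:{line_no}  {line}")
```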
Second: centralize cryptographic operations wherever possible. API gateways, TLS termination points, signing services, key management services, certificate lifecycle management platforms: these are your agility points. Every cryptographic operation that flows through a centralized platform is an operation that can be migrated by changing the platform’s configuration. Every cryptographic operation that an application handles directly is an operation that requires application-level code or configuration changes. The ratio between centralized and decentralized cryptographic operations is a direct measure of your crypto-agility posture.
Applications that bypass centralized cryptographic services are your technical debt. They should be identified, documented, and prioritized for refactoring. In a large enterprise, eliminating all direct cryptographic library calls is not realistic in the short term. But establishing the policy that new applications must use centralized services, and that existing applications must migrate to them as part of their normal modernization lifecycle, sets the trajectory.
Third: add crypto-agility to procurement requirements today. Every HSM contract, every PKI platform contract, every vendor agreement for devices with 5+ year lifecycles should include contractual requirements for PQC algorithm support and firmware upgradeability. If a vendor cannot commit to supporting PQC algorithms within a defined timeline, that should factor into the procurement decision. This is not a theoretical future requirement. CNSA 2.0 already requires PQC for new NSS acquisitions starting January 2027. NIST IR 8547 targets deprecation of quantum-vulnerable algorithms by 2030. Any device or service purchased today that cannot support PQC before 2030 is a device or service that will need to be replaced before its useful life ends.
Fourth: treat the CBOM as an operational tool. The Cryptographic Bill of Materials should be a living system that feeds architecture decisions, risk assessments, and migration planning. It should be updated continuously through automated discovery, version-controlled, and reviewed as part of regular architecture governance. The Rethinking CBOM article I published argues for this operational model against the more common approach of treating the CBOM as a compliance document produced once and filed away. In the context of crypto-agility, the CBOM is the map that shows you where agility exists and where it does not. Without it, you are navigating blind.
Fifth: eliminate hard-coded algorithm selections in new code. This is a code review and CI/CD policy change, not a technology project. Add a linting rule or code review checklist item that flags direct cryptographic library calls with hard-coded algorithm parameters. Route new code through centralized cryptographic services. Ensure that infrastructure-as-code templates and CI/CD pipeline definitions use parameterized algorithm selections rather than fixed strings. This will not fix the existing codebase, but it stops the problem from growing. Every hard-coded algorithm selection added today is a task on a future migration backlog.
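The enforcement mechanism can be very small. A sketch of such a gate for a git-based CI pipeline, with an illustrative banned-pattern list (a real policy would carry allowlists and suppressions):

```python
"""Sketch of a CI gate that fails a build when newly added code hard-codes
an algorithm. The pattern list is illustrative; a real policy would be
broader and support suppressions."""
import re
import subprocess
import sys

BANNED = re.compile(r'algorithm\s*=\s*"(RS256|ES256|RSA-2048)"')

# Inspect only the lines added in this change set (diff against main).
diff = subprocess.run(["git", "diff", "origin/main", "--unified=0"],
                      capture_output=True, text=True).stdout
violations = [l for l in diff.splitlines()
              if l.startswith("+") and not l.startswith("+++") and BANNED.search(l)]

if violations:
    print("hard-coded algorithm selection in new code; use the crypto policy service:")
    print("\n".join(violations))
    sys.exit(1)
```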
Sixth: plan for algorithm rotation, not algorithm replacement. The PQC migration is not the last cryptographic migration your organization will execute. Algorithms will be revised, parameters will be updated, new threats will emerge, and new standards will be published. NIST has already signaled that additional PQC algorithms (HQC as a backup KEM, FN-DSA for compact signatures) are coming. The standards fragmentation across jurisdictions will produce new requirements that cannot be predicted today. The cryptographic history of the past 30 years (DES to 3DES, MD5 to SHA-1, SHA-1 to SHA-2, RSA-1024 to RSA-2048, now RSA/ECC to PQC) shows a pattern of accelerating transitions, each one more complex than the last.
Design your architecture for the assumption that algorithms will change again, and that the change should be a policy operation rather than a program. If the PQC migration is the forcing function that finally makes your organization crypto-agile, the migration will have been worth the effort regardless of when a CRQC arrives. The architecture you build to support PQC migration, if built correctly, is the architecture that makes every future cryptographic transition operationally tractable instead of existentially threatening.
The PQC Migration Framework at PQCFramework.com provides the structured methodology for executing this program across all eight phases, from securing executive mandate through continuous vendor governance. Practical Steps to Quantum Readiness provides the entry points for organizations at the beginning of the journey. Quantum Ready covers the strategic and organizational dimensions for leaders who need to build the business case.
Crypto-Agility Is Not a Feature. It Is an Architecture.
The phrase “crypto-agile” is easy to say and extraordinarily hard to achieve. It requires centralized cryptographic services, hardware that can be updated, protocols that negotiate, vendors that commit, procurement processes that enforce, and codebases that abstract. It requires a living cryptographic inventory, continuous architectural governance, and the institutional discipline to maintain it all over time.
Most organizations today are not crypto-agile. They have cryptographic choices embedded in hundreds of systems, managed by dozens of teams, constrained by hardware that cannot be updated, governed by protocols they did not write, and dependent on vendors whose timelines they do not control. The gap between “we are crypto-agile” and the architectural reality is, in my experience, the single most underestimated dimension of PQC readiness.
The good news is that closing this gap produces value beyond PQC. A crypto-agile architecture responds faster to algorithm compromises (the next Heartbleed, the next collision attack), adapts to regulatory changes across jurisdictions, enables compliance with divergent international standards, and reduces the operational cost of every future cryptographic transition. The PQC migration is the forcing function, but the agility it demands is a permanent architectural improvement.
The organizations that will handle the PQC transition successfully are not those that picked the right algorithm today. They are those that built the architecture to change algorithms tomorrow.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.