Rethinking Crypto-Agility

Crypto-Agility: The Ideal vs. Reality

At its core, crypto-agility means being able to swiftly swap out cryptographic algorithms or implementations when weaknesses emerge. In an ideal world, an organization could “drop in” a new encryption algorithm as easily as a software patch, staying ahead of threats like quantum computing. The goal is admirable – if you’re nimble in updating encryption, migrating to stronger algorithms is “no big deal”. In theory, this agility would let us replace vulnerable ciphers (like RSA or ECC) with post-quantum cryptography (PQC) at the drop of a hat, minimizing the window of exposure to quantum attacks.

In practice, however, this ideal is seldom achieved. Outdated algorithms often linger far past their expiration dates because “ripping them out would break things” in complex ecosystems. A vivid example is SHA-1: even after it was declared broken and deprecated, support for SHA-1 hung around for years in many protocols simply because not all the dependent systems could drop it overnight.

For CISOs and technology leaders, this disconnect between theory and reality signals that crypto-agility’s definition needs an update. True crypto-agility must become an organizational practice – one focused not on effortless algorithm swaps (a nice-to-have that “now feels… fundamentally impractical”), but on continuous, adaptive risk mitigation – by any means necessary.

The Elusive Ideal of Easy Algorithm Swaps

It’s easy to see why the classical notion of crypto-agility – “swapping out ciphers on the fly” – is appealing. Decades ago, engineers even attempted to build plug-and-play crypto frameworks (for example, adding national algorithms to Windows’ TLS stack via drop-in DLLs). But those attempts revealed how naive this vision was: each new algorithm introduced unforeseen bugs, performance hits, and interoperability failures. Dynamically swapping arbitrary cryptographic primitives in a live system turned out to be far more complex than anticipated.

Why is seamless swapping so hard in practice? The reality is that cryptography isn’t an isolated module – it’s woven through protocols, hardware trust anchors, client-server handshakes, and supply chains of dependent systems. Upgrading a cipher suite in a vacuum is useless if the counterpart system (or that one forgotten IoT device) doesn’t support the new algorithm. True agility requires coordination across both clients and servers, which you often don’t own.

Legacy OT systems (industrial controllers, SCADA devices, etc.) exemplify this challenge: many run on antiquated hardware and proprietary protocols that don’t support modern encryption or can’t be patched easily. An aging smart grid controller might have AES-128 or even custom ciphers baked into its firmware, with no straightforward upgrade path.

Likewise, immutable trust anchors in silicon – say, a hardcoded RSA public key in a device’s ROM or a fixed certificate authority in a car’s ECU – can make the notion of “just swap the algorithm” moot. These devices can’t “flip a switch” to accept a new post-quantum signature scheme or larger key size. The consequence is that organizations end up stuck on a sinking ship, unable to easily grab the lifeboat when cryptography cracks appear.

Continuous (and Varied) Mitigation

Given that seamless algorithm replacement is rarely attainable, we should redefine crypto-agility with continuous cryptographic risk management as the real goal: a capability that detects weaknesses, decides on responses, and deploys mitigations on a rolling basis – in short, continuous discovery of weaknesses and prompt risk reduction using whatever means are available. In this reframing, mitigation – not just straight replacement – is the heart of real-world agility.

The approach can be summarized in three pillars:

1. Continuous Detection of Vulnerable Cryptography

You can’t fix what you can’t see. The first step is to discover and inventory all cryptographic assets across systems, devices, applications, and protocols. This goes beyond a basic software bill of materials; it means creating a cryptography bill of materials (CBOM) that maps what algorithms and key lengths are in use, where and how.

Many organizations are surprised by how much “crypto debt” lurks in their environments – from hard-coded credentials in legacy code to third-party devices using deprecated ciphers. Most regulators’ first mandate is exactly this: perform a prioritized cryptographic inventory of high-impact systems.

New tools are emerging to aid this effort. This kind of continuous monitoring is the foundation of crypto-agility: you need radar to spot the cryptographic cracks before they turn into chasms.
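As a rough illustration of what a CBOM entry might look like, the sketch below scans configuration text for weak-crypto indicators. The pattern list and entry format are my own assumptions, and a real scanner would also parse certificates, TLS handshakes, and binaries, not just config files:

```python
import re

# Assumed indicator patterns; a production CBOM tool would cover far more.
WEAK_PATTERNS = {
    "RSA-1024": re.compile(r"rsa[_-]?1024", re.I),
    "SHA-1": re.compile(r"\bsha-?1\b", re.I),
    "3DES": re.compile(r"\b(3des|des-ede3)\b", re.I),
}

def scan_config(asset_name: str, text: str) -> list[dict]:
    """Return CBOM entries for every weak-crypto indicator found in `text`."""
    return [{"asset": asset_name, "algorithm": algo, "status": "deprecated"}
            for algo, pattern in WEAK_PATTERNS.items()
            if pattern.search(text)]

# Hypothetical legacy VPN config as input.
cbom = scan_config("legacy-vpn.conf", "cipher des-ede3-cbc\nauth SHA1\n")
```

Running this against the sample config yields two CBOM entries (SHA-1 and 3DES), the kind of raw material that feeds the prioritization step below.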

2. Rapid Decision on Mitigation Strategy (Augmented by AI)

Once a vulnerable or deprecated crypto element is identified, the next challenge is deciding what to do about it – and doing so quickly. Should you drop everything and replace that algorithm everywhere? In reality, that might be infeasible (it could break compatibility or require waiting on vendor patches). Often, the better question is how to mitigate the risk until a full replacement is viable.

This decision-making is complex: it must weigh the sensitivity of the data, the exposure of the system, available alternatives, and the ripple effects of changes.

I believe we can turn to AI-driven analysis to assist in this strategy phase. The goal is an AI that can act as a “cryptographic strategy advisor,” crunching through the inventory data, threat intelligence, and business context to recommend the optimal path. Trained on an organization’s context – CMDB, AMDB, BIAs, network maps, crown-jewels assessments, and so on – such an AI could deliver insights on crypto risk and even accelerate remediation, enabling fast, smart decisions about where and how to mitigate.

In an environment where hundreds of applications and devices might rely on a soon-to-be-broken algorithm, such AI assistance is invaluable. It might flag, for instance, that an internal-facing system using RSA-2048 can be tackled later, but an Internet-facing VPN with the same algorithm needs an immediate PQC overlay because it’s at high risk of harvest-now, decrypt-later interception. By analyzing cryptographic dependencies and business impact, AI can help triage which mitigations will most effectively reduce risk with minimal disruption. Ultimately, this is about rapid, informed decision-making – moving at machine speed to counter threats that evolve at machine speed.
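To make the triage idea concrete, here is a minimal scoring sketch. The risk weights, exposure tiers, and the harvest-now/decrypt-later multiplier are all illustrative assumptions, not a real methodology; an actual advisor would derive them from CBOM data, threat intelligence, and business-impact analyses:

```python
from dataclasses import dataclass

# Assumed 0-5 risk scale per algorithm and 1-3 exposure tiers.
ALGO_RISK = {"RSA-2048": 3, "RSA-1024": 5, "ML-KEM-768": 0}
EXPOSURE = {"internal": 1, "partner": 2, "internet": 3}

@dataclass
class Asset:
    name: str
    algorithm: str
    exposure: str              # "internal" | "partner" | "internet"
    data_lifetime_years: int   # how long the data must stay confidential

def triage_score(a: Asset) -> int:
    """Higher score = mitigate sooner; long-lived data doubles the weight
    to reflect harvest-now, decrypt-later exposure."""
    hndl = 2 if a.data_lifetime_years >= 10 else 1
    return ALGO_RISK.get(a.algorithm, 1) * EXPOSURE[a.exposure] * hndl

assets = [
    Asset("internal ERP", "RSA-2048", "internal", 3),
    Asset("site-to-site VPN", "RSA-2048", "internet", 15),
]
ranked = sorted(assets, key=triage_score, reverse=True)
```

Even with identical algorithms, the Internet-facing VPN carrying long-lived data ranks far above the internal ERP – the same distinction described above.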

And there are numerous mitigation options beyond the straightforward upgrade to PQC.

3. Practical Mitigation and Deployment

With a strategy in hand, the final step is to take action – deploying mitigations that actually work in the messy reality of enterprise systems. This is where crypto-agility shines as an organizational muscle. Rather than a simplistic “replace algorithm X with Y everywhere” (which might be impossible on short notice), agile organizations use a mix of clever approaches to reduce risk now, while paving the way for cleaner replacements later. These approaches could include:

Hybrid Encryption

Instead of relying on a single algorithm, use two. For example, in networking, a hybrid TLS handshake might combine a classical key exchange (like X25519 elliptic-curve Diffie-Hellman) with a post-quantum key exchange (like Kyber). This way, even if one algorithm is later broken, the connection’s security still stands on the other. Hybrid modes buy time – they let you introduce quantum-safe crypto alongside legacy crypto, dramatically reducing the risk of “sudden death” if a single algorithm fails. For more, see Hybrid Cryptography for the Post-Quantum Era.
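The core mechanic of a hybrid handshake is simple: derive the session key from both shared secrets, so it remains safe as long as either input survives. The sketch below uses random bytes as stand-ins for the X25519 and Kyber/ML-KEM outputs and a minimal HKDF (RFC 5869) as the combiner; real hybrid TLS drafts use a similar concatenate-then-derive construction inside the handshake key schedule:

```python
import hashlib, hmac, os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets; in a real handshake these come
# from the X25519 and ML-KEM (Kyber) exchanges respectively.
classical_ss = os.urandom(32)
pq_ss = os.urandom(32)

# Concatenation means an attacker must break BOTH inputs to recover the key.
session_key = hkdf_sha256(classical_ss + pq_ss, b"hybrid-handshake")
```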

Redundant or Layered Encryption (“Overlay” Networks)

When an underlying system can’t readily be changed, an agile response is to wrap it in a protective layer. Think of it like a secure tunnel encasing an older pipeline. One real-world example: companies have deployed quantum-safe overlay tunnels to protect data links that still rely on legacy encryption. Some solutions create a parallel AES-encrypted tunnel where the AES key exchange is secured by a PQC algorithm or even quantum key distribution. The original data (which might be using vulnerable RSA/TLS underneath) is thus traveling inside an outer, quantum-resistant shell. An eavesdropper would need to break the outer PQC layer to get to the inner data – an astronomically harder task.

This “encrypt the encryptions” strategy adds a quantum-safe shield without waiting to overhaul the inner system. Early adopters in finance and government are using such layered encryption as defense-in-depth: for instance, layering a PQC VPN tunnel over existing VPNs between data centers. Even if the inner VPN uses a weaker cipher, the outer layer keeps the data safe from future decryption.

The beauty of overlays is that they can often be added transparently via network appliances or middleware, with no code change to legacy endpoints.
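Conceptually, the overlay just re-encrypts an already-encrypted record with an outer key agreed via PQC. The sketch below illustrates the layering with a toy XOR stream cipher (a SHAKE-256 keystream) standing in for AES-GCM; it is not production cryptography, and the PQC-derived outer key is shown as random bytes:

```python
import hashlib, os

def toy_stream_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Illustrative XOR stream cipher (SHAKE-256 keystream). NOT for
    production use - it stands in for AES-GCM in a real overlay appliance."""
    keystream = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

# Inner layer: a legacy TLS record, already protected by the old stack.
inner_record = b"...legacy TLS ciphertext..."

# Outer layer: key assumed to come from a PQC exchange or QKD link.
outer_key, nonce = os.urandom(32), os.urandom(12)
wrapped = toy_stream_encrypt(outer_key, nonce, inner_record)

# The receiving appliance strips the outer shell; XOR is its own inverse.
unwrapped = toy_stream_encrypt(outer_key, nonce, wrapped)
```

The legacy endpoints never see the outer layer at all, which is exactly why overlays can be deployed as transparent network appliances.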

Segmentation and Isolation

In cases where neither swapping algorithms nor adding encryption layers is feasible (say the system is too old or performance-sensitive), a mitigation can be to contain the risk. Network segmentation is a time-tested security move, and it applies here too. If an IoT device or OT workstation uses an insecure protocol that can’t be fixed immediately, an agile strategy might cordon it off in a segregated network zone with strict access controls and monitoring. By limiting exposure, you reduce the chances that an attacker can exploit that weak crypto from afar.

Segmentation doesn’t solve the cryptographic weakness, but it can buy breathing room until a better fix is in place. Additionally, segmentation can be combined with inline controls; for example, enterprise firewalls and proxies today can enforce cryptographic policies – blocking any connections that try to use disallowed, weak algorithms.

In effect, you “virtually patch” a weak endpoint by not letting it communicate insecurely beyond a confined area.
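A minimal sketch of such an inline control, assuming an enforcement point that can see the cipher suites a client offers (suite names and the ban list are illustrative):

```python
# Assumed policy table; a real enforcement point would inspect the TLS
# ClientHello at a firewall or proxy and load this from central policy.
DISALLOWED = {"TLS_RSA_WITH_3DES_EDE_CBC_SHA", "TLS_RSA_WITH_RC4_128_SHA"}

def filter_handshake(offered: list[str]) -> list[str]:
    """Drop disallowed suites; refuse the connection if nothing remains."""
    allowed = [s for s in offered if s not in DISALLOWED]
    if not allowed:
        raise ConnectionRefusedError("no policy-compliant cipher suites offered")
    return allowed
```

A weak endpoint behind this filter can only negotiate compliant suites within its zone – the “virtual patch” described above.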

Trust-Anchor Bridging (Cryptographic Gateways)

A particularly clever mitigation emerges when you have legacy devices that only “speak” old cryptography. Instead of trying to retrofit each device, you introduce an intermediary that speaks both languages. These cryptographic gateways serve as translators between old and new crypto regimes. For instance, imagine a legacy industrial controller that only supports RSA-based TLS. A gateway appliance in front of it could handle an incoming connection from a client using a PQC-enabled TLS 1.3 suite, then terminate and re-encrypt the session to the controller using the legacy RSA cipher it understands. The controller sees nothing different – it still talks RSA – but externally the connection was quantum-safe.

Likewise, if you have a hardware token or secure module that only trusts a specific RSA-2048 certificate (an immutable trust anchor), you might bridge that trust by cross-signing a new, stronger root certificate with the old one’s key. The gateway (or a PKI trick) thus ensures that devices with baked-in trust can validate new cryptography indirectly. Bridging solutions like these are recommended by experts as compatibility bridges for PQC deployment, allowing organizations to roll out quantum-safe algorithms without waiting for every last device and application to natively support them. In essence, the gateway speaks post-quantum on one side and legacy on the other, keeping everyone happy and secure.
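The gateway’s per-record job reduces to decrypt-then-re-encrypt between the two legs. The sketch below shows that flow with a toy keystream cipher in place of the real TLS record protection on each side; keys, names, and the cipher itself are illustrative stand-ins:

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher (SHAKE-256 keystream) standing in for the real record
    protection on each leg. NOT for production use."""
    ks = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def gateway_relay(pqc_key: bytes, legacy_key: bytes, pqc_record: bytes) -> bytes:
    """Terminate the PQC-protected leg, then re-protect the payload for the
    legacy device that only understands its old cipher."""
    plaintext = xor_keystream(pqc_key, pqc_record)   # decrypt the outer (PQC) leg
    return xor_keystream(legacy_key, plaintext)      # re-encrypt for the legacy leg

# Client side: protect telemetry under the hypothetical PQC-session key.
pqc_key = b"pqc-session-key!".ljust(32, b"0")
legacy_key = b"legacy-rsa-key".ljust(32, b"0")
outbound = xor_keystream(pqc_key, b"telemetry")

# The legacy controller decrypts with the only key it has ever known.
delivered = xor_keystream(legacy_key, gateway_relay(pqc_key, legacy_key, outbound))
```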


The unifying theme across these tactics is mitigation over perfection. Rather than an idealized swap of Algorithm A for B everywhere, an agile organization mixes and matches approaches to reduce risk swiftly. Hybridization, layering, isolation, bridging, and other options are all tools in the CISO’s toolkit to extend the life of legacy systems safely or add quantum-resilience without ripping out roots of trust immediately.

Crucially, these measures can often be done in parallel with longer-term migrations. For example, you might deploy a PQC overlay tunnel this year to protect a critical data flow, while also working with the vendor of the underlying system on a roadmap to support native PQC by 2028.

Crypto-agility means always having a Plan B (and C, and D) for when Plan A (instant replacement) isn’t possible. And make no mistake: Plan A will not be immediately possible for the vast majority of your quantum-vulnerable cryptography.

Toward an AI-Driven, Continuous Crypto Strategy

Looking ahead, the next evolution of crypto-agility should be a transformation into a continuous, AI-assisted “cryptographic strategy advisor” for the enterprise, in much the same way that companies have adopted continuous monitoring and automated incident response. This isn’t a distant fantasy – there’s nothing stopping us from developing it today:

AI-Augmented Analysis

I have already developed, for my own use, a simple AI model that correlates cryptographic inventories with threat landscapes and business priorities. I know some of the larger players are thinking along the same lines. Eventually, such a system could scan an environment for cryptographic weaknesses and instantly suggest the best mitigation (e.g., “Enable hybrid mode on these 50 external-facing servers first, because your CBOM shows they protect data that must remain confidential through 2035”).

By training AI on CBOM data, known vulnerabilities, and best-practice libraries – as well as your assets, configurations, business impact analyses, dependencies, criticality ratings, previous incident reports, and more – organizations can get proactive recommendations at a scale and speed humans alone could never achieve.

Automated Remediation & Orchestration

The endgame is not just advice, but automated action where safe and feasible. The highest maturity level for crypto-agility should be a state where organizations automate continuous monitoring and trigger remediations without human intervention.

In practical terms, this could mean an AI/automation pipeline that, for example, detects a non-compliant cipher usage on a network segment and automatically pushes a policy update to block it or initiates a workflow to reconfigure those systems.

Some enterprises have already started integrating cryptographic management into DevOps pipelines – for instance, if a developer tries to use a banned algorithm, the CI/CD system flags or replaces it before merge.
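A CI/CD gate of that kind can be as simple as a pattern check over changed files. The ban list and file name below are assumptions for illustration; in practice the policy would live in a central repository and the check would fail the merge on any finding:

```python
import re

# Assumed ban list; a real gate would load this from the crypto policy repo.
BANNED = {
    "MD5": re.compile(r"\bmd5\b", re.I),
    "SHA-1": re.compile(r"\bsha-?1\b", re.I),
    "DES": re.compile(r"\bdes\b", re.I),
}

def check_source(path: str, text: str) -> list[str]:
    """Return one finding per banned algorithm referenced in `text`."""
    return [f"{path}: banned algorithm {name}"
            for name, rx in BANNED.items() if rx.search(text)]

# Hypothetical diff hunk submitted by a developer.
findings = check_source("auth.py", "digest = hashlib.md5(data).hexdigest()")
```

In a pipeline step, a non-empty findings list would translate to a non-zero exit code and a blocked merge.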

In the future, we can imagine AI agents that dynamically adjust cryptographic settings across a fleet in response to new intel (e.g., if a weakness is announced in an algorithm, the AI immediately raises the risk score of all systems using it, and if possible, deploys a compensating control network-wide).

The goal is a closed-loop system: monitor -> analyze (AI) -> act -> repeat, constantly optimizing the cryptographic posture.
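One pass of that loop can be sketched schematically; every name, asset, and action here is an illustrative stand-in, with each function representing real tooling (scanners, the AI advisor, orchestration APIs) in production:

```python
def monitor() -> list[dict]:
    """Stand-in for continuous discovery feeding the CBOM."""
    return [{"asset": "vpn-gw", "algorithm": "RSA-2048"},
            {"asset": "intranet-app", "algorithm": "AES-256"}]

def analyze(findings: list[dict]) -> list[dict]:
    """Stand-in for the AI advisor: flag quantum-vulnerable key exchange."""
    return [{"asset": f["asset"], "action": "deploy-pqc-overlay"}
            for f in findings if f["algorithm"].startswith("RSA")]

def act(plan: list[dict]) -> list[str]:
    """Stand-in for orchestration applying compensating controls."""
    return [f"{p['asset']}: {p['action']} applied" for p in plan]

# One iteration of monitor -> analyze (AI) -> act; production systems repeat this.
results = act(analyze(monitor()))
```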

Institutionalized Crypto Risk Governance

Achieving continuous mitigation isn’t just a technical challenge; it’s an organizational one. Leading companies are beginning to treat cryptographic risk as a first-class citizen in risk registers and governance processes. This means establishing crypto steering committees or task forces that regularly review the state of cryptographic controls (fueled by the rich telemetry from the AI tools above).

It also means upskilling the security team in cryptography, so they can understand AI recommendations and make nuanced decisions when human judgment is required.

The transition to post-quantum algorithms in the coming years is not an end in itself, but the first big test of this continuous capability.

Conclusion: Building Adaptive Crypto Resilience

The message for CISOs and tech leaders is clear: cryptographic agility is about building systems that can evolve gracefully and preserve trust under pressure, whether that pressure comes from a cryptanalytic breakthrough or a regulatory mandate. Achieving this means investing in the people, processes, and technology that prioritize cryptographic vulnerability management over mere algorithm-swapping ability.

It means reframing success: not “we swapped out RSA for Kyber everywhere,” but “we can detect any cryptographic weakness and rapidly neutralize its risk to our business.”

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.