Post-Quantum

PQC Is Necessary, But Not Sufficient – Building Quantum Resilience the Right Way

Introduction

Post-quantum cryptography (PQC) is finally moving from theory to practice. In August 2024, NIST released three core PQC standards – a lattice-based key-encapsulation mechanism (Kyber, standardized as ML-KEM in FIPS 203), a lattice-based digital signature scheme (Dilithium, ML-DSA in FIPS 204), and a stateless hash-based signature scheme (SPHINCS+, SLH-DSA in FIPS 205). These algorithms are believed secure against quantum attacks, and early adopters have already begun integrating them. In March 2025, NIST even selected an additional encryption algorithm (HQC, based on error-correcting codes) as a backup in case weaknesses are ever found in the main lattice KEM. This growing PQC toolbox is a huge step forward in addressing the quantum threat.

However, simply “dropping in” PQC algorithms will not magically make systems quantum-safe. Real security hinges on how these new primitives are implemented, integrated, and layered into our systems. A quantum-resistant algorithm on paper can still fail in practice due to coding bugs, side-channel leaks, protocol limitations, or misuse within a larger insecure design.

In short: PQC is necessary but not sufficient. It must be one pillar of a broader, multi-layered strategy for resilience. This is especially true as we face not only future quantum-enabled adversaries, but also increasingly automated, AI-powered attacks and the ever-present toolkit of classical exploits. The only credible defense is a holistic approach that combines PQC with robust implementation practices and defense-in-depth design across our infrastructure.

Lately, in my industry discussions, I’ve noticed a degree of over-reliance on PQC. I repeatedly hear statements such as “the problem is solved” or “we just need to wait for our vendors to integrate PQC” – and those couldn’t be further from the truth. So let’s explore why over-reliance on PQC alone is dangerous. The goal is to show that while PQC is a game-changing development, it’s not a silver bullet. Instead, it’s a critical transitional technology to be combined with implementation hardening, crypto-agility, layered encryption, modernized PKI, and other best practices to achieve long-term security.

The State of PQC Standards – And the Pitfalls Ahead

With NIST’s PQC algorithms finalized, the world is entering a new crypto transition. The standards provide a necessary foundation, but they are only the beginning. Experience has shown that standards alone don’t guarantee security outcomes – how we implement and deploy these algorithms matters enormously.

Algorithm vs. Implementation

One immediate pitfall is assuming a secure algorithm means a secure system. A striking example came to light in 2024 with KyberSlash, a family of timing side-channel vulnerabilities in certain implementations of Kyber (ML-KEM). The cryptographic math of Kyber remained sound, but some software libraries had subtle coding issues – notably, secret-dependent division instructions – that leaked information during decapsulation.

Researchers demonstrated they could recover a Kyber-768 private key in minutes on a commodity processor when these variable-time operations were present. In a practical attack, an adversary could repeatedly send crafted ciphertexts to a vulnerable server and measure the slightly different response times; with enough queries, the full secret key emerges. In some cases, even remote exploitation was feasible by timing network responses. The fix did not require a new algorithm – it required engineering the implementation to remove secret-dependent behavior (e.g. using constant-time arithmetic and compiler settings). In other words, the algorithm was fine; the implementation was the weak link.
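To make the “secret-dependent division” idea concrete, here is a deliberately simplified sketch in Python (which itself offers no timing guarantees – real fixes are done in C or assembly). This is not Kyber’s actual code: the modulus is Kyber’s q, but the scaling factor and function names are choices made for this illustration. The point is only the shape of the fix – replace a quotient computed with a hardware division by a precomputed multiply-and-shift plus one data-independent correction.

```python
# Illustrative sketch only: the shape of a KyberSlash-style fix, not Kyber's real code.
Q = 3329                    # Kyber's modulus
BARRETT = (1 << 37) // Q    # precomputed reciprocal, floor(2^37 / q)

def quotient_variable_time(t: int) -> int:
    # What the vulnerable code effectively did: a division whose latency can
    # depend on the (secret-derived) value of t on many CPUs.
    return t // Q

def quotient_constant_time(t: int) -> int:
    # Multiply-and-shift estimate, then a single correction computed without a
    # secret-dependent branch. Valid for 0 <= t < 2**37.
    q_est = (t * BARRETT) >> 37
    q_est += (t - q_est * Q) >= Q      # adds 1 only if the estimate was one short
    return q_est

# Sanity check that both compute the same quotient over a range of inputs.
assert all(quotient_variable_time(t) == quotient_constant_time(t)
           for t in range(1 << 16))
```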

PQC signature schemes face similar issues. CRYSTALS-Dilithium, the new lattice-based signature, rests on solid mathematical security arguments, yet it too can be undermined by side channels or faults if implementations aren’t carefully hardened. Academic teams have already shown power analysis and timing attacks that extract Dilithium private keys from poorly secured implementations – even in some that attempted countermeasures. In one case, a single power trace from an unprotected Dilithium-2 signing operation (captured via electromagnetic emanation) gave a machine-learning algorithm enough leakage to recover half the secret key, and with a few traces the entire key could be reconstructed.

Other researchers demonstrated that a single fault injection during Dilithium’s deterministic signing process could break it entirely, highlighting why Dilithium’s latest standard version defaults to a “hedged” (randomized) mode for safety.

The lesson is clear: PQC implementations demand the same rigor we apply to classical crypto – constant-time code, masking, fault detection, careful code review and testing on real hardware. A theoretically quantum-secure algorithm offers little comfort if a hacker can simply read your secrets via a timing leak or glitch your device to bypass its security. Solid engineering and diligence are mandatory to realize the promise of PQC.

Crypto-Agility and Surprises

Another reason PQC isn’t a one-shot solution is that the field itself is still young and evolving. During NIST’s multi-year standardization process, we witnessed major candidates catastrophically fail under classical cryptanalysis – a humbling reminder that cryptography is an arms race of ideas. In 2022, the SIKE algorithm (an isogeny-based key exchange once touted as a quantum-safe option) was completely broken by researchers using a classical computer, who found a clever algebraic attack to recover SIKE private keys in about an hour on a single core.

That same year, the Rainbow signature scheme – a multivariate-quadratic design – was defeated even more dramatically: a crypto researcher discovered an attack that recovered Rainbow’s secret key on a laptop in roughly 2 days (53 hours), effectively “breaking Rainbow in a weekend”.

NIST promptly eliminated both SIKE and Rainbow from contention.

Another digital signature candidate, GeMSS, succumbed to improved “rank” attacks that slashed its security estimates, undermining confidence in its future.

None of this was a failure of the NIST process – rather, it was the process working as intended, weeding out fragile designs. But it underlines an important point: cryptographic algorithms can and will fail, sometimes suddenly. We must design systems with crypto-agility – the ability to swap out or update algorithms – as a core requirement. If one of the new PQC algorithms hits a snag in five years, we’ll need a rapid way to patch or replace it across all our systems. This is why many experts advocate hybrid approaches (combining classical and PQC algorithms) and emphasize not relying on any single primitive. Agility and diversity in our crypto mitigate the risk of “cryptanalysis whiplash.”

Integration Challenges (PKI, Protocols, and Plumbing)

Adopting PQC also runs into very practical issues of size and performance, which can’t be ignored. PQC public keys and signatures tend to be much larger than RSA or ECC. This bloats digital certificates, TLS handshakes, and other protocols in ways that can break assumptions in existing infrastructure.

For example, early experiments in 2023-2024 with hybrid TLS (adding a Kyber-based key exchange to TLS 1.3 alongside classical X25519) uncovered problems with network middleboxes and legacy systems. Some old firewall and VPN devices had hardcoded assumptions about handshake message sizes – they expected, say, a ClientHello to fit in one packet – and when the post-quantum key data made it larger, the connections were dropped or disrupted. Similarly, some TLS terminators choked on the new cryptographic code points or the increased certificate sizes. Google and Cloudflare engineers reported several such interop hiccups during their public post-quantum TLS beta deployments. Even after standards are finalized, these issues persist: when the draft Kyber identifier used in early experiments was replaced by the code point for the final ML-KEM standard, clients and servers had to be updated in sync – a mismatch meant the two sides could no longer negotiate the post-quantum exchange.

All of this means PQC rollout must be done carefully and with flexibility. Protocols may need extensions or tweaks (for example, fragmentation of large messages, or “hybrid” structures to carry multiple keys). Industry groups are on it – ETSI has defined hybrid key establishment techniques that combine an elliptic-curve Diffie-Hellman secret and a PQC KEM secret into one stronger shared key, and the IETF LAMPS working group is standardizing formats for PQC certificates and even composite certificates that can contain both a classical and a post-quantum public key/signature together. These measures help smooth the transition by maintaining compatibility. But the takeaway is that upgrading to PQC is not like flipping a switch. You have to consider the whole ecosystem – certificates, software stacks, hardware offload engines, network gear, client support – and be ready for some heavy lifting in terms of upgrades and troubleshooting. Integrating PQC without also modernizing your PKI and protocols can “secure” one part (the math) while breaking others, inadvertently creating new denial-of-service or security issues.

In summary, the arrival of standardized post-quantum algorithms is a milestone to celebrate, but security professionals must approach PQC adoption with eyes wide open. The algorithms alone won’t save us unless we implement them correctly and redesign systems around them. Next, we’ll look at the urgency of getting this right sooner rather than later – and then outline how to do it in a layered way.

“Harvest Now, Decrypt Later” – The Race Against Time

Why all the urgency about transitioning to PQC? The danger is not merely theoretical or some far-off problem for future CISOs; it’s immediate. Nation-state adversaries are almost certainly recording today’s encrypted data traffic with the intent to decrypt it later when quantum capabilities become available. This is the infamous “harvest-now, decrypt-later” tactic. Even if a functioning cryptographically relevant quantum computer (CRQC) is a decade away (the exact timeline is uncertain), any sensitive information with a shelf life of 10+ years is at risk now. An eavesdropper can store your encrypted VPN sessions, database dumps, or confidential emails intercepted in 2025, and then in 2035 feed them to a quantum computer to reveal all the secrets. Thus, the clock is ticking for data with long confidentiality requirements – think medical records, intellectual property, state secrets, critical infrastructure schematics, etc., that must remain secure for decades. Every year that goes by without PQC increases the window of vulnerability for that long-lived data.

Governments around the world have recognized this threat and are sounding the alarm for immediate action. In the U.S., a flurry of directives in late 2022 and 2023 (NSM-10, OMB Memo M-23-02, the Quantum Computing Cybersecurity Preparedness Act) mandate federal agencies to inventory their cryptographic usage and prioritize systems for migration to PQC. Agencies were given tight deadlines – e.g. a comprehensive crypto inventory within 6 months, and an actionable migration plan to post-quantum algorithms shortly after. High-impact systems and any data or devices that will still be sensitive beyond 2035 are to be addressed first. The U.S. NSA’s updated CNSA 2.0 suite (the cryptographic standards for national security systems) explicitly requires quantum-resistant solutions. NSA has set target dates like 2027 by which new classified systems should use PQC (or approved hybrids), and even in the interim they’ve advised using larger symmetric keys and pre-shared keys as stopgaps.

Similar urgency is echoed internationally – bodies in Europe, Japan, Canada, etc., have all published roadmaps urging critical industries to begin the PQC transition well before a quantum computer arrives.

The consistent message is: don’t wait. As the U.S. National Cyber Director recently put it, “the threat isn’t just on the horizon; it’s here now” – meaning the harvest-now threat is already active. Organizations, especially in critical infrastructure and government, should treat the post-quantum migration as a present-day project, not a future contingency. This involves dedicating budget and personnel to the effort, just as was done for Y2K or other major tech transitions. Inventory your uses of public-key crypto (where do you use RSA, ECC, DH, etc.?), assess the sensitivity and longevity of the data those systems protect, and formulate a migration plan. Not everything will (or can) be upgraded overnight – but you want a prioritized list so that you tackle the highest-risk items first.

A key concept here is “secrecy lifetime” – how long the confidentiality of a given piece of data must last. You should categorize your data and systems by secrecy lifetime: if it’s more than a few years, that system should be high on your PQC migration list. For instance, an online banking session might only need to stay secret for minutes (after that transaction is done, disclosure might be less critical), whereas a power grid blueprint or a genome data set might need decades of secrecy. The latter category is where harvest-now, decrypt-later bites hardest, and thus needs PQC (or interim protections like extra symmetric encryption) as soon as possible.

In practical terms, being quantum-ready now means instituting things like: crypto agility (ability to easily swap libraries and algorithms), crypto inventory BOMs (knowing exactly which software and devices use vulnerable crypto), and possibly deploying hybrid encryption (classical+PQC) in critical links to safeguard against the unknown. The good news is that symmetric cryptography – the kind used for bulk encryption – is largely safe from quantum attacks if key sizes are big enough. Grover’s algorithm can accelerate brute-force key search, but it only provides a quadratic speedup – in effect halving the bit strength of symmetric ciphers and hash functions. For example, Grover could reduce AES-256’s strength to about 128-bit security, and AES-128’s to about 64-bit. Consequently, simply using 256-bit keys (and moving to SHA-256 or SHA-3 for hashing) is a sufficient hedge for symmetric encryption – something many organizations have already done, per current NIST guidance. There’s no equivalent simple fix for public-key crypto; that’s why PQC for public-key schemes (encryption, key exchange, digital signatures) is urgent.
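As a sanity check on those numbers, here is a trivial back-of-the-envelope calculation (illustrative only; it ignores the enormous constant factors and serialization constraints that make Grover even less practical than this suggests):

```python
import math

# Grover's quadratic speedup: a key of n bits needs ~2**n classical guesses
# but only ~2**(n/2) Grover iterations.
for key_bits in (128, 192, 256):
    grover_iterations = math.isqrt(2 ** key_bits)      # sqrt(2^n) = 2^(n/2)
    print(f"AES-{key_bits}: ~2^{key_bits} classical guesses, "
          f"~2^{int(math.log2(grover_iterations))} Grover iterations")
```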

So the bottom line on timelines: the threat of a quantum adversary “later” is influencing security decisions now. Smart adversaries aren’t going to announce when they achieve quantum decryption capability – they will quietly exploit it. Our window to get ahead of them is the next few years. By laying the groundwork today (inventories, planning, pilot implementations), we can ensure that when PQC is deployed at scale, it’s done thoughtfully and effectively. Conversely, if we ignore the issue until a quantum breakthrough is confirmed, we’ll already be on the back foot. In critical infrastructure and other sensitive sectors, that could be disastrous.

Defense-in-Depth: PQC as a Pillar, Not the Whole Building

Given the above, how should organizations proceed? The clear answer from experts is that systemic, multi-layered resilience is the only effective strategy – against quantum threats and any others. No single technology, even PQC, will suffice on its own. We need defense in depth, where PQC is one layer among many, reinforcing overall security. Both NIST and ENISA have emphasized that the PQC transition is as much about robust system engineering as it is about cryptography.

One immediate tactic during this transition period is hybrid cryptography – using classical and post-quantum algorithms in tandem. This is a risk-management move: by combining (for example) an elliptic-curve Diffie-Hellman key exchange with a Kyber key encapsulation, you get a composite key that an attacker can only defeat by breaking both algorithms. If one algorithm turns out weaker than thought, the other still provides protection. Hybrid modes are already being specified for TLS, IPsec, and other protocols. ETSI’s technical standards explicitly describe “cascaded” and “concatenated” hybrid key exchanges that use two independent keys combined via a KDF. Likewise, the draft IETF specs for X.509 certificates include composite certificates capable of holding (for example) both an RSA and a Dilithium signature on the same certificate. Such approaches ease migration – old systems can ignore the new algorithm if they don’t understand it (using the classical part), while new systems get PQC strength but still fall back on classical if needed for compatibility. Hybridization is not a long-term solution (eventually the old algorithms will be dropped), but it’s an important bridge over the next decade. It also provides cryptographic “hedging” against the uncertainty in PQC – recall that multiple PQC candidates fell to cryptanalysis during standardization, so it’s prudent not to put all our eggs in one lattice-based basket. NIST’s guidance via the NCCoE and ENISA’s reports both recommend considering hybrid deployments as part of early migration.
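To illustrate the combiner idea, here is a minimal sketch. The X25519 exchange and the HKDF call use the real Python `cryptography` package; the ML-KEM shared secret is stubbed with random bytes because no particular post-quantum Python library is assumed here.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: ephemeral X25519 Diffie-Hellman (both sides shown locally here).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
ecdh_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: placeholder for an ML-KEM (Kyber) decapsulation result.
mlkem_secret = os.urandom(32)   # stand-in for the KEM shared secret

# Combiner: concatenate both secrets and derive one session key. If either
# component algorithm is later broken, the other still protects the key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example hybrid x25519+mlkem key schedule",
).derive(ecdh_secret + mlkem_secret)

print(session_key.hex())
```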

Another layer to consider is augmenting public-key crypto with symmetric secrets during the interim. One idea gaining traction is to mix a pre-shared key (PSK) into your key exchanges. For instance, some VPN protocols can be configured with a long random pre-shared key that’s combined with the usual public-key exchange. Even if the public-key part (RSA/ECDH) is later broken, the attacker would also need the PSK to decrypt the traffic. This concept is mentioned in ENISA’s quantum mitigation studies – they suggest “mixing pre-shared keys into all keys established via public-key cryptography” as a mitigation that can be implemented right now. It’s essentially an extra belt to go with the suspenders. The U.S. NSA’s Commercial Solutions for Classified (CSfC) program similarly allows certain layered architectures (for example, two layers of encryption, one of which could be a one-time pad or symmetric-only link) to mitigate quantum risk in the near term. The takeaway: use symmetric crypto’s strength to your advantage whenever you can. AES-256, one-time pads for link encryption where feasible, HMACs, etc., remain rock-solid. Design your systems such that even if an attacker could crack one layer of encryption, another independent layer protects the most sensitive data.

It’s also critical to recall that cryptography is just one part of a secure system. Quantum computers won’t hack you by magic; they will break certain mathematical problems. But if your system has other weaknesses, those will be exploited as well – likely far sooner than someone builds a large-scale quantum computer. For example, if you migrate to PQC but leave your servers unpatched against classical exploits, or your users susceptible to phishing, a quantum adversary might not need to bother attacking the crypto at all! Thus, quantum readiness should be viewed as one component of overall cyber resilience efforts (alongside zero-trust architecture, regular threat monitoring, software supply chain security, etc.). In fact, the advent of powerful AI tools on the offensive side (for automating phishing at scale, finding zero-days, deepfaking identities, etc.) means our classical security problems are not going away – they’re getting harder. ENISA’s threat landscape analyses and MITRE’s ATLAS framework (Adversarial Threat Landscape for AI) both urge organizations to plan for “AI-accelerated” attackers. This could manifest as AI rapidly triaging vulnerabilities or crafting very convincing social engineering. In that context, relying on any single defense is unwise. We must layer protections so that even if one fails – be it a cryptographic scheme or an AI anomaly detector or a firewall – others still stand.

The Human and Process Angle: A multi-layered defense also means preparing your people and processes. A quantum-resilient system isn’t just about algorithms; it’s about having the organizational agility to pivot when something breaks. Do you have an incident response playbook for a sudden cryptographic weakness? (E.g., “If tomorrow an attack on ML-KEM is published, here’s how we’d rapidly revoke and replace all our certificates and re-encrypt sensitive data.”) Such playbooks should be in place. Teams should also be training now on PQC concepts, testing out the new algorithms in labs, and integrating post-quantum considerations into architecture reviews. The worst scenario would be to wake up to news of a breakthrough and have neither tools nor skills ready to react.

In essence, quantum resilience = systemic resilience. It’s not a product you buy, it’s an approach you embed at every level: crypto choices, software implementation, network architecture, user policies, supply chain, and incident response. PQC provides some powerful new bricks for this fortress, but you need to lay them with good mortar and surround them with other reinforcing structures.

Implications for Critical Infrastructure and OT Systems

Implementing a multi-layered PQC strategy is challenging in any IT environment, but critical infrastructure sectors face unique hurdles that make it both more difficult and more urgent. Sectors like energy, transportation, telecommunications, healthcare, and industrial control systems (ICS/SCADA in manufacturing or utilities) have a history of using long-lived equipment and protocols that were not designed with frequent crypto updates in mind. It’s not uncommon for industrial control devices or embedded systems in these sectors to have 10- to 20-year lifecycles, with updates only applied during rare maintenance windows. Many operate under tight real-time constraints and with an overriding emphasis on availability and safety – meaning any change that could introduce latency or downtime is resisted. Unfortunately, PQC algorithms (with their larger keys and outputs) can strain these constraints, so migrating critical infrastructure will require extra care.

Consider an operational technology (OT) network in a power plant or factory. The devices (PLCs, sensors, relays) might have very limited CPU/memory, and the communication protocols might be basic, with small packet sizes and no room for large certificates or multi-packet handshakes. Introducing PQC here – say, to secure a firmware update or a VPN tunnel into the plant – might hit technical roadblocks. One issue is performance: a PQC handshake or signature verification might simply be slower or larger than the legacy system tolerates. If a substation controller expects an authentication to be done in 2ms and the PQC algorithm takes 10ms, that could be unacceptable. Another issue is ecosystem fragility: lots of OT protocols are very sensitive to changes. Some critical systems still use proprietary or outdated cryptographic schemes not because they’re secure, but because they’re the only thing compatible with certain hardware. Replacing them with PQC might require a hardware refresh or a protocol overhaul, which is costly and slow to deploy.

Nonetheless, critical infrastructure operators must start planning for this now, because they are high-value targets for adversaries. The recently released CISA guidance on Post-Quantum Considerations for OT specifically points out that OT environments differ greatly from IT in terms of asset lifecycles and crypto usage, and that implementing PQC in many OT systems will be a “significant and enduring challenge”. But it must be tackled. The guidance suggests steps like inventorying all places where public-key crypto is used in OT (for example, in remote sensor communications, or VPN access to control networks, or code signing for PLC firmware) and then working with vendors to ensure there are upgrade paths. It also recommends segmenting networks such that even if one cryptographic control fails, an attacker doesn’t automatically get full run of a critical OT network – an application of zero-trust principles to industrial settings.

For a deeper dive into OT challenges and approaches see my previous post: Upgrading OT Systems to Post‑Quantum Cryptography (PQC): Challenges and Strategies.

Another implication for critical sectors is the need for tailored migration playbooks. For instance, in the power grid sector, there are standards like IEC 61850 and IEEE protective relay protocols – sector-specific bodies should be updating those to include PQC options or at least not preclude them. The healthcare sector might focus on the PKI for medical devices (ensuring that certificate update mechanisms can handle PQC). The aviation and automotive sectors will need to ensure that future hardware (like next-gen aircraft communication systems or car onboard security modules) has the horsepower and tested libraries to do PQC, since those vehicles will be in service into the 2040s and beyond. Regulators and industry groups are starting to weigh in: the U.S. FDA has begun asking medical device manufacturers how they plan to address the upcoming transition, and we can expect similar inquiries in other sectors.

One more point: don’t forget the physical and safety aspect. Critical infrastructure cybersecurity has to be ultra-cautious about changes because mistakes can cause physical outages or safety incidents. So, any introduction of new cryptographic tech like PQC needs thorough testing in non-production environments. Imagine if a miscalculation in a buffer size for a PQC handshake caused a controller to crash – if that controller is running a power grid, the result could be a blackout. This is why NIST’s guidance (e.g., SP 800-82 for ICS security) emphasizes robust testing and phased deployment. Start by trialing PQC in a small, isolated segment of your OT network, observe the impacts (latency, reliability), adjust, and then expand the rollout. It’s also wise to budget for hardware upgrades: some legacy devices simply won’t be able to run PQC algorithms due to limited CPU or memory. Those may need replacement with newer models or addition of companion devices (like a gateway that does the heavy crypto on behalf of a dumb sensor). Knowing this, CISOs in critical sectors should be engaging with their procurement teams now to insert PQC-readiness as a requirement for new equipment – whether it’s an RTU for the grid or an MRI machine in a hospital. Long lead times mean decisions made today will determine if that gear can be upgraded in 5 years or not.

In short, critical infrastructure organizations face a “perfect storm” of long-lived tech, high stakes, and advanced adversaries. They should treat the PQC transition as part of broader resilience initiatives (like safety system modernization and zero-trust networking). By doing pilot projects now and sharing lessons learned (perhaps through industry ISACs or coordination with agencies like CISA), they can avoid being caught off-guard. The worst outcome would be doing nothing and then, say, around 2030 when a quantum code-breaking capability seems imminent, scrambling to rip-and-replace crypto in systems that were never designed for it. Far better is to lay the groundwork gradually, starting today.

A Blueprint for “PQC+” Resilience: Practical Steps

How can an organization put all this together into a concrete plan? Let’s outline an opinionated blueprint – a step-by-step program for CISOs and security architects to build quantum-resilient systems the right way. Think of it as a checklist that treats PQC as one vital component in a larger security strategy.

1. Establish Crypto Governance and Agility

Begin by putting in place the processes and people responsible for your cryptographic health. Designate an owner (or team) for cryptographic migration – someone accountable for tracking crypto across the enterprise (often this will be an enterprise architect or head of security engineering).

Implement a cryptographic inventory: you need to know all the places where cryptography is used in your organization, including libraries, protocols, devices, and third-party services. This is akin to a “crypto bill of materials.” Within that inventory, categorize each system by the type of cryptography and, importantly, the secrecy lifetime of the data it protects. Identify systems and data that need to remain secure 10+ years into the future – those handling long-term sensitive data (personal data, intellectual property, government classified, etc.). These are high-priority for PQC. Also identify any proprietary or non-standard cryptography – those often hide legacy weaknesses that PQC transition could address as an added bonus.
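As an illustration of what a single inventory record might capture, here is a minimal sketch; the field names and the triage rule are invented for this example, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    system: str                    # where the crypto lives
    usage: str                     # e.g. "TLS server", "code signing", "VPN"
    algorithm: str                 # e.g. "RSA-2048", "ECDSA-P256", "AES-256"
    secrecy_lifetime_years: int    # how long the protected data must stay confidential
    vendor_controlled: bool        # True if only the vendor can swap the algorithm

    @property
    def pqc_priority(self) -> str:
        # Simple triage: quantum-vulnerable public-key crypto guarding long-lived data first.
        public_key = self.algorithm.upper().startswith(("RSA", "ECDSA", "ECDH", "DH"))
        if public_key and self.secrecy_lifetime_years >= 10:
            return "high"
        return "medium" if public_key else "low"

inventory = [
    CryptoAsset("patient-records-db", "TLS server", "RSA-2048", 25, True),
    CryptoAsset("build-pipeline", "code signing", "ECDSA-P256", 5, False),
    CryptoAsset("backup-archive", "data at rest", "AES-256", 30, False),
]
for asset in inventory:
    print(f"{asset.system}: {asset.algorithm} -> {asset.pqc_priority} priority")
```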

Next, plan for agility. Develop playbooks for how you will update or swap out cryptographic components. For example, if you use a certain TLS library, how will you deploy a version supporting PQC? How will you roll back if there’s incompatibility? How quickly could you re-issue certificates or keys across your environment if needed? Tabletop these scenarios. Many organizations are now instituting “crypto agility” policies – requiring that any new system procured must support algorithm flexibility (no hard-coded, unchangeable crypto). If you have vendors or suppliers, start asking them about their quantum-safe roadmap – do they have products that support PQC or at least configurable crypto? This pressure from customers will accelerate readiness across the supply chain.

Align your plans with published guidance: NIST’s NCCoE has a draft practice guide (SP 1800-38) on migrating to PQC, which gives a detailed “project plan” template for enterprises to follow (covering inventory, risk assessment, proof-of-concept migrations, etc.). CISA/NSA’s joint factsheet on Quantum-Readiness also provides a high-level roadmap for organizations. Use these resources to ensure you’re not missing steps. Essentially, treat the PQC transition as a formal project – like a digital transformation initiative – complete with leadership support, budget, timelines, and KPIs (e.g., “by Q4 2024, all Tier-1 systems have been tested with PQC alternatives available”). A strong governance foundation will pay off when you start making the actual technical changes.

2. Embrace Hybrid Cryptography – in Both Data and Identity Planes

As discussed, hybrid approaches will be key to a smooth migration. In practice, this means:

For data-in-transit (communications): start enabling hybrid key exchange in protocols like TLS, VPN, SSH, and Wi-Fi as options. For example, TLS 1.3 can be configured to do a classical ECDH (X25519) and a post-quantum KEM (like Kyber) simultaneously, deriving a shared key from both via a combiner function. Cloudflare and Google’s experiments showed this is feasible and can be rolled out at scale. Many libraries (OpenSSL, BoringSSL, AWS s2n, etc.) are adding support for hybrid key exchanges following the emerging standards (IETF has an RFC draft for “Hybrid key exchange in TLS”). You can pilot this now on internal systems or even internet-facing ones if you whitelist clients that support it. Measure the impact: how much does the handshake size increase? (Possibly a lot – e.g., X25519 + Kyber768 key shares will add a few KB). Does that cause any fragmentation or middlebox issues on your network? Test different paths and record if anything breaks. Early testers found that some middleboxes would drop TLS ClientHello messages that exceeded a certain size – knowing this, you might need firmware updates from vendors or interim configuration workarounds while the ecosystem catches up. Better to discover these issues in a controlled pilot than during a crisis. The goal is that, by the time PQC is standardized and widely available, your network and applications have already digested the necessary changes via hybrids.
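A small probe like the sketch below can supply the before/after telemetry for such a pilot. Note that Python’s standard `ssl` module can neither request a hybrid key-exchange group nor report which group was negotiated, so this only records handshake success, protocol version, cipher, and latency – still useful as a baseline when you flip hybrids on at the server. The hostnames are placeholders.

```python
import socket
import ssl
import time

def probe_tls(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            elapsed = time.perf_counter() - start
            return {
                "host": host,
                "protocol": tls.version(),        # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],
                "handshake_seconds": round(elapsed, 3),
            }

# Compare a pilot endpoint against a control endpoint; failures are the signal you care about.
for host in ("pqc-pilot.example.internal", "control.example.internal"):
    try:
        print(probe_tls(host))
    except OSError as err:
        print(f"{host}: handshake failed ({err})")
```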

For data at rest: you can also adopt a hybrid mindset. One approach some are taking for high-security data is dual-encrypting stored files or databases – e.g., encrypt with AES-256 (which is quantum-resistant for practical purposes) and also encrypt the key with a post-quantum algorithm (or even two different PQ algorithms). This double-wrapping means even if one cipher is later cracked, the data remains safe unless the other is too. It’s like belt and suspenders for stored data. This might be overkill for general use, but for ultra-sensitive archives, it’s worth considering as an interim measure.
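A minimal sketch of that double-wrapping pattern, using the `cryptography` package for the AES-256-GCM layer; the post-quantum wrap is left as a labeled placeholder because no particular ML-KEM or HSM API is assumed here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def pq_wrap_key(data_key: bytes) -> bytes:
    # Placeholder only: in practice this would be ML-KEM encapsulation or an
    # HSM-backed post-quantum key wrap protecting the data key.
    return data_key

record = b"long-lived sensitive record ..."

data_key = AESGCM.generate_key(bit_length=256)        # inner, quantum-resistant symmetric layer
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, record, None)

stored = {
    "nonce": nonce,
    "ciphertext": ciphertext,
    "wrapped_key": pq_wrap_key(data_key),             # outer layer protects the key itself
}
print({name: len(value) for name, value in stored.items()})
```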

In the identity/authentication realm: examine how you might use hybrid or composite certificates. The IETF LAMPS drafts on composite certificates allow a single certificate to contain two public keys and two signatures (one classical, one PQ). This could ease migration for things like code signing or document signing, where you want continued support in older systems (using the classical signature) while adding PQ protection. While these standards are still being finalized, some products already offer “composite certificate” support in testing mode. At a minimum, plan to maintain parallel credential systems if needed – for example, you might issue all users a post-quantum VPN certificate in addition to their existing one, and configure the server to accept either/or (until one day you drop the old).

Also consider pre-shared keys as an added layer in particularly sensitive environments (the NSA’s interim guidance suggests this for classified systems). For instance, a government may decide that until PQC is fully validated, they will operate sensitive links in a dual mode: one using a classical or PQ public-key exchange plus a one-time pre-shared pad distributed via courier. This is cumbersome but provides immediate quantum security. If you operate critical infrastructure that absolutely cannot fail, such measures might be justified in the short term.

3. Prioritize Secure Implementation (No “Side Doors”)

No matter how strong an algorithm is on paper, a single side-channel flaw can render it ineffective. So, as you deploy PQC, treat implementation security as non-negotiable. This means enforcing coding best practices and testing for leaks. For any software-based implementation of PQC in your environment, ensure it is constant-time with respect to secret data. This typically involves using well-vetted libraries (like those from the PQClean project or commercial SDKs that have been hardened). But don’t assume – verify. In the KyberSlash incident, many projects thought their implementation was constant-time when in fact an innocuous-looking division by the modulus in the source code compiled down to variable-time division instructions on many platforms. Now there are tools (some open-source, like a patched Valgrind from the KyberSlash researchers) that can scan binaries for variable-time instructions. Use them as part of your CI pipeline when building PQC code.

If you develop your own embedded code for PQC (say, in a smart card or IoT device), invest in side-channel countermeasures: blinding, noise injection, masking of intermediate values, etc., as appropriate to the algorithm. Lattice-based schemes like Kyber and Dilithium have known side-channel attack vectors, but also known defenses (for example, one can randomize the noise vectors in Dilithium signing to thwart certain power analysis attacks, at some performance cost). Make those trade-offs consciously. Importantly, engage experts or labs to test your implementations. This might involve having a third-party lab perform a power analysis on your device to see if the key can be extracted in a single trace (which some research has shown is possible on unprotected Dilithium). Yes, this is a level of scrutiny above normal software QA – but if the asset being protected is critical, it’s worth it. The history of hardware security modules (HSMs) is instructive here: many HSMs advertising RSA/ECC protection had to evolve over years to add constant-time fixes and shielding against timing and cache attacks. We must not assume newly minted PQC hardware or code is magically secure – it will have a hardening journey too.

Include fault tolerance as well. Fault attacks, where an attacker induces a glitch in a device to make it act in a non-standard way, have broken some “secure” algorithms in labs. Dilithium’s deterministic mode, for instance, was shown to be highly vulnerable to a single random fault injection, which is why the hedged mode is now default. If you’re deploying PQC in untrusted environments (like a smartcard given to the public), consider using either hash-based signatures (which are very robust against faults by design) or implementing runtime checks that detect abnormal conditions during cryptographic operations (e.g., verifying intermediate computations or using redundant computation and comparing results).
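One common runtime check of this kind is “verify before release”: the signer checks its own output and fails closed on a mismatch. The sketch below uses Ed25519 from the `cryptography` package purely as a stand-in signer (the same pattern applies to a PQC signer once you have one); note that it catches many, but not all, fault models.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_with_self_check(private_key: Ed25519PrivateKey, message: bytes) -> bytes:
    signature = private_key.sign(message)
    try:
        # Recompute/verify before releasing the signature to the caller.
        private_key.public_key().verify(signature, message)
    except InvalidSignature:
        # Fail closed: better to withhold a signature than to leak key
        # material through a faulted one.
        raise RuntimeError("signature failed self-verification; possible fault")
    return signature

key = Ed25519PrivateKey.generate()
print(sign_with_self_check(key, b"firmware-update-v2.bin").hex())
```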

Finally, pay attention to randomness and entropy in your implementations. PQC algorithms often rely on good random number generation (for salts, nonces, noise vectors). Ensure your RNGs are strong (conformant to NIST SP 800-90A if possible) and that they are properly seeded, especially on embedded/IoT devices which sometimes struggle with entropy. A predictable RNG can completely undermine otherwise secure cryptography.
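In Python-based tooling, that means sourcing all cryptographic randomness from an OS-backed CSPRNG, for example:

```python
import secrets

# `secrets` wraps os.urandom; the general-purpose `random` module must never be
# used for keys, nonces, or the seed material handed to a PQC library.
nonce = secrets.token_bytes(12)      # e.g. an AEAD nonce
kem_seed = secrets.token_bytes(32)   # e.g. seed material for key generation
print(nonce.hex(), kem_seed.hex())
```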

4. Modernize Your PKI and Protocols

The move to PQC is a chance to overhaul legacy PKI infrastructure that may already be creaking under strain. Many organizations have old certificate authorities, outdated TLS configurations, etc. Now is the time to bring those up to scratch in preparation for issuing PQC credentials. Set up a lab environment where you experiment with PQC certificates. For example, try building an X.509 certificate chain where the end-entity cert uses a Dilithium public key and is signed by a Dilithium CA. See how large the certs are (spoiler: Dilithium certs are bigger, but not insanely so – a few kilobytes). Test how your OCSP or CRL distribution handles larger signatures. You might find you need to enable stapling to avoid sending huge CRLs. Also test path length and chain depth issues; some software might have arbitrary limits that a PQC cert chain violates. The IETF LAMPS working group is actively defining the standards here – keep an eye on their drafts for “dilithium-certificates” and “sphincs-certificates” profiles. These will likely become RFCs that vendors follow, so aligning with them early is wise.
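One handy trick during such a trial: you can gauge how much a chain grows without an X.509 parser that understands the new algorithms, by summing the DER size of each PEM block in the bundle (the file path below is a placeholder).

```python
import base64
import re

pem_text = open("test-pqc-chain.pem").read()
blocks = re.findall(
    r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----", pem_text, re.S
)
# Strip whitespace from each base64 body and measure the decoded DER length.
der_sizes = [len(base64.b64decode("".join(block.split()))) for block in blocks]
print(f"{len(der_sizes)} certificates, {sum(der_sizes)} bytes of DER total: {der_sizes}")
```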

Be prepared for interoperability hiccups during this period. It’s almost certain that when, say, a web browser first enables a post-quantum mode by default, some middlebox or backend system will break (as we’ve seen in tests). The way to mitigate this is through gradual, well-monitored rollouts. Enable PQC ciphersuites on a subset of servers and closely monitor connection success rates, user agent types, error logs, etc. Collaborate with vendors – many vendors will release patches to make their appliances PQ-friendly (for example, some firewall might need an update to allow larger handshake messages). But you might be the one to discover the issue and inform them. So allocate time for that kind of work (e.g., “spend Q1 2025 testing PQC in dev environment and reaching out to vendors X, Y with any issues found”).

Key management systems and HSMs are part of PKI too – check with your HSM provider about PQC roadmaps. A lot of HSMs use hardwired RSA/ECC accelerators which won’t support PQC algorithms without firmware updates or new modules. Some vendors are releasing hybrid HSMs that do PQC in software inside the module. Understand the performance – a single Dilithium signature might be fast on a CPU, but if your HSM has a 1980s-era processor, it could be slow; that might throttle your transaction rates. Plan accordingly (maybe you need to cluster more HSMs or use cloud-based KMS which can scale easily).

Also look into emerging standards for post-quantum TLS and IPsec. The protocols themselves (TLS 1.3, IKEv2, etc.) don’t need fundamental changes for PQC, but there are extension drafts that specify how to encode the new algorithms. Ensure you’re up to date on those, because implementing pre-standard versions could cause interoperability trouble later. A case in point: early adopters who built prototypes with draft versions of Kyber in TLS had to update their code when the final standard changed the identifier for Kyber’s ciphersuite – and some real-world middleboxes were confused by the change, requiring adjustments. Staying aligned to the official standards as they solidify (and upgrading test environments accordingly) will minimize such pain.

In short, don’t bolt PQC onto an old, shaky PKI – fix the foundation. That means ensuring robust support for modern protocols (TLS 1.3 everywhere, for instance, since it’s easier to extend for PQC), cleaning up any hard-coded assumptions (like certificate size limits), and generally streamlining your crypto infrastructure so it’s ready to handle the new load.

5. Use Data-Centric and Layered Security Alongside PQC

Even with PQC, prudent security architecture calls for protecting sensitive data in multiple ways. One principle is data-centric security – protect the data itself in addition to protecting channels and systems. For example, if you have a database of customer records, you might encrypt the entire database with a master key (which will eventually be protected by PQC algorithms), and you might encrypt individual fields (like each customer SSN or password) with a separate key at the application level. The latter could even be a symmetric key derived from a user password or stored in a secure enclave. The idea is to layer encryption such that if one layer’s key is compromised, the inner layer still provides security.

For highly sensitive keys (like a CA private key or the keys protecting millions of encrypted records), consider splitting secrets using techniques like threshold cryptography or multi-party computation. For instance, a master key could be split into N shares using a threshold scheme, and you’d require any M of them to reconstruct. This way, there is no single point where the full key exists to be stolen or leaked – an attacker would have to compromise multiple servers or HSMs. There are PQC-compatible threshold schemes as well (NIST has discussed threshold implementations for some of the new algorithms). While threshold crypto doesn’t directly protect against quantum attacks on the math, it does protect against a host of other attacks (insider threats, malware) and adds another speed bump for adversaries.
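As a toy illustration of the “no single point holds the whole key” idea, here is an n-of-n split by XOR – every share is required to rebuild the key. Real threshold schemes (Shamir, or the threshold constructions NIST has discussed for the new algorithms) are more flexible, allowing any M of N shares.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    # n-1 random shares, plus one share chosen so all shares XOR back to the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for share in shares[1:]:
        out = xor_bytes(out, share)
    return out

master_key = secrets.token_bytes(32)
parts = split(master_key, 3)            # e.g. one share per HSM or key server
assert combine(parts) == master_key
```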

Revisit your use of cryptographic primitives with a quantum lens. Hash functions and symmetric ciphers are still your friends – use them generously. For example, if you are concerned about the possibility that an elliptic curve could be broken by quantum, you could add an additional layer of symmetric encryption on a connection as a backup (like encrypting all data with an AES-256 session key, in addition to the normal TLS, ensuring that even if the TLS key exchange were broken later, the content remains encrypted under a one-time symmetric key). Admittedly this is unusual, but it’s conceptually simple and leverages the fact that AES-256 is believed quantum-resistant (it would take on the order of 2^128 operations for a quantum computer to brute force, which is currently viewed as infeasible).

When it comes to stored data (“data at rest”), consider prioritizing re-encryption of long-lived archives. Many organizations have tapes or backups encrypted with RSA or ECC-based keys. Those are exactly the things that, if adversaries obtained, they could hold onto and decrypt later when quantum capabilities arrive. It might be worth re-encrypting those archives now using quantum-safe symmetric techniques (again, AES-256) and storing the keys in a way that they can be updated if needed (wrapped with a PQC public key, for instance). Also, for critical data, think about freshness: applying new keys periodically (regular re-keying) so that if any one key were compromised, only a limited window of data is exposed. This is already standard practice for high security (e.g., rotate encryption keys yearly or quarterly), but quantum adds another reason: you’d rather have many short-lived keys (so an attacker has to break each one separately) than one key protecting everything for 20 years.

Hardening your key derivation functions (KDFs) and key management is another angle. Even in a post-quantum scenario, you want to ensure that if an attacker somehow gets partial information, they can’t easily derive the rest. KDFs like HKDF (HMAC-based KDF) or the newer KMAC (Keccak-based MAC/KDF) are believed to be safe against quantum attacks (Grover gives at best a quadratic advantage, which is negligible if you use sufficiently large outputs). Ensure you’re using these robust KDFs for any composite key material (like combining a classical and PQ key, or stretching a shared secret). This is actually part of the hybrid key establishment standards – they define secure combiners (often just a hash or HMAC) to merge secrets.

Finally, don’t neglect physical security and access control in the name of quantum security. Sometimes folks get so focused on the cool new quantum-safe encryption that they forget basic protections. For critical systems, layering still means having good network segmentation, so an attacker who breaches an outer network can’t directly access an inner control system that has PQC – they’d face additional hurdles like jump hosts, monitoring, etc. And if, hypothetically, a quantum computer did appear on the threat landscape, that actor likely is extremely well-resourced (think nation-state with a big budget) – they might attempt “classical” hacking to complement any cryptographic attacks. So general cyber hygiene (patching, least privilege, 24/7 monitoring) remains as crucial as ever. In other words, PQC doesn’t replace your firewall or your SOC; it adds to an overall secure design.

6. Architect for Resilience and Zero Trust

Stepping back, the ideal end-state is a system that can withstand the failure of any one component – including a cryptographic component – without catastrophic breach. This is the essence of resilience. Embracing zero-trust principles is one way to move toward that goal: assume that no single element (user, device, application) is inherently trustworthy, and continuously validate using multiple signals. How does that relate to PQC? In a zero-trust approach, even if an adversary somehow breaks one layer of encryption or impersonates one identity, they should not immediately get access to everything, because each transaction or session is verified independently (and often with context-aware policies).

For example, imagine an OT network for water treatment where operators log in to a control system. In a traditional design, if an attacker stole the operator’s VPN credentials (or broke the VPN crypto), they might then pivot freely within the control network. In a zero-trust design, even once inside, each device and action could require further authentication/authorization – perhaps the PLCs only accept commands signed with a local key or coming from a specific jump server, and even then, an anomaly detection is watching for out-of-pattern commands. The idea is to limit the blast radius of any single failure. This is especially relevant in OT where safety is concerned – you might isolate safety systems from control systems entirely, such that even a full compromise of control (via quantum or otherwise) cannot trigger dangerous actions without separate safety overrides.

Plan for crypto agility at the architecture level too. This means designing your infrastructure in a way that you can roll out changes in a segmented manner. Feature flags or toggles for “use PQC here” can help manage the transition. For instance, you might initially run your corporate VPN in a dual-stack mode: half of the gateways use classical TLS, half use hybrid PQC TLS. Monitor performance and issues. Then gradually shift the ratio. Cloud-native environments are great for this kind of canary approach. For more static environments (like a fleet of IoT devices), you might design an update mechanism that supports two algorithms in parallel, so devices can switch when both ends support the new one.
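A feature flag for this kind of canary rollout can be as simple as the sketch below; hashing a stable endpoint identifier keeps each gateway or device in the same bucket across restarts (the identifier names and the threshold are illustrative).

```python
import hashlib

HYBRID_ROLLOUT_PERCENT = 25      # start small, raise as monitoring stays clean

def use_hybrid_key_exchange(endpoint_id: str) -> bool:
    # Deterministic bucketing: the same endpoint always lands in the same bucket.
    bucket = int(hashlib.sha256(endpoint_id.encode()).hexdigest(), 16) % 100
    return bucket < HYBRID_ROLLOUT_PERCENT

for gateway in ("vpn-gw-eu-1", "vpn-gw-us-2", "vpn-gw-apac-3"):
    print(gateway, "->", "hybrid" if use_hybrid_key_exchange(gateway) else "classical")
```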

Hardware refresh is the elephant in the room for some sectors. In telecom, for example, many systems use custom ASICs/accelerators for crypto. Those will likely need upgrades to handle PQC (since PQC algorithms like ML-KEM and ML-DSA are not drop-in replacements on existing ECC/RSA accelerators). Build an inventory of hardware that may be impacted – e.g., SSL offload cards, smartcards, chipsets in vehicles, etc. Work with vendors or start budgeting to replace those in the coming years with PQC-capable versions. Some industries have already been thinking of this (the smartcard/secure element industry, for example, is prototyping PQC support knowing that digital passports and such will eventually need it). But for niche equipment, if you don’t raise the requirement, it might not happen.

And lastly, prepare for the unexpected: have a crypto incident response plan. This is a specialized form of incident response dealing with “when crypto goes wrong.” That could be a sudden break of an algorithm (like the SIKE and Rainbow events) or a major implementation vulnerability (like Heartbleed was for OpenSSL, or like if a significant flaw is found in an HSM firmware). Your plan should answer questions: How quickly can we rotate all certificates our organization uses? (Automated certificate management tools can help here.) Do we have a way to remotely update cryptographic firmware on devices in the field, if needed? If a root CA algorithm is compromised, how do we distribute a new trust anchor securely? These are tough questions, and you ideally want to have done exercises or simulations. Some organizations hold annual drills for scenarios like “our primary encryption algorithm is compromised; what do we do in 48 hours to protect critical data?” It might involve steps such as: forcing a switch to a secondary cipher in configs, re-encrypting certain databases with a symmetric-only scheme as stopgap, or even disconnecting some systems from networks until patched. While these scenarios are unlikely, the quantum era reminds us that the unthinkable can become real, and being caught unprepared could be devastating.

Real-World Case Studies: Warnings and Lessons

It’s helpful to look at a few recent cases that highlight why an integrated approach to PQ security is so important.

Case Study 1: KyberSlash timing attacks (2024). We discussed KyberSlash earlier – researchers found that many implementations of Kyber KEM were inadvertently leaking secrets through timing differences. In one demo on a Raspberry Pi, the full Kyber secret key was recovered in a matter of minutes by exploiting the timing of a particular arithmetic operation across repeated decapsulation queries. What’s the lesson? Even the best algorithm can be undermined by a tiny coding issue. The community responded by auditing libraries and issuing patches – essentially a massive, coordinated implementation hardening effort. Going forward, such attacks will likely appear again for other PQC schemes (attackers will look for bits of code that do branch mispredictions, cache loads, etc., based on secret data). So the takeaway is a call to arms for developers: secure coding, side-channel testing, and use of constant-time techniques are table stakes in the post-quantum era. If you implement PQC (or use libraries), don’t treat them as a black box – validate that they follow best practices. This case also showed the value of open collaboration: the issues were responsibly disclosed and fixed, avoiding a fiasco. Organizations should stay plugged into the crypto research community for early heads-up on such issues.

Case Study 2: TLS 1.3 post-quantum experiment (2023). Google Chrome and Cloudflare conducted large-scale trials of hybrid post-quantum TLS on the public internet. They enabled a hybrid X25519+Kyber key agreement for a fraction of Chrome users and Cloudflare’s servers and observed what happened. The result was mostly success – but they did encounter non-negligible failure rates due to some middleware boxes not handling the larger handshake or new cipher suite. In one report, certain middleboxes simply dropped ClientHello messages that were above a certain size threshold, causing connection failures. In another, some servers had issues with the draft vs final Kyber identifiers, requiring quick updates. The moral: protocol and PKI agility is not just a nice-to-have; it’s essential for deploying new cryptography. Organizations learned that rolling out PQC would require careful version negotiation, feature flags, and collaboration with ecosystem partners (browser makers, CDN providers, etc.). It reinforced the principle that you should expect some things to break and thus deploy gradually with the ability to fall back. After adjustments, the experiment succeeded and as of early 2024, Cloudflare noted that about 2% of all TLS connections it handles are already using post-quantum key exchange in hybrid mode. The lesson to CISOs: start testing these waters early and share results. Don’t be the last to discover that your critical firewall drops post-quantum traffic – find out in a controlled way and fix it.

Case Study 3: PQC candidate collapses (2022). When the break of SIKE and the attacks on Rainbow were announced, it sent shockwaves through the community. Many people (non-cryptographers) had assumed these candidates were solid. The fact they fell so easily to classical cryptanalysis was a bit of a reality check – we might have gotten lucky that the weaknesses were discovered before standardization. Imagine if Rainbow had been deployed widely and then broken; we’d have a serious scramble. The takeaway here is the value of crypto-agility and risk hedging. Organizations should anticipate that any algorithm could potentially have an unexpected weakness. Thus, designing systems to be agile (e.g., support swapping out the signature algorithm in your update mechanism) is critical. Also, the notion of hybrid again – if you had been using Rainbow in parallel with, say, an RSA signature, the break would not have compromised security because RSA was still intact. Hybrid and composite solutions, as well as maintaining a diversified algorithm portfolio, are like insurance. This case also underscores the importance of ongoing cryptanalysis: just because NIST picked some winners doesn’t mean the analysis stops. Security teams should keep an eye on new cryptanalytic results coming out of academic conferences each year related to their algorithms in use. Treat cryptography as a dynamic field, not a static set-and-forget choice.

There are other illustrative cases – for example, the evolution of hash-based signatures (which are extremely robust but have performance and size trade-offs) shows that even very secure schemes can be fragile if used incorrectly (some early deployments of stateful hash signatures ran into trouble when state management wasn’t handled perfectly, leading to accidental reuse of one-time keys). The lesson from those is that operational processes (in that case, ensuring a device never reused a signature key by proper synchronization) are part of security too.

All these real-world signals point to one conclusion: quantum-ready security is a multi-faceted challenge. We need to handle new algorithms with care, adapt our infrastructure, and prepare for surprises. Those that do so systematically will navigate the transition much more smoothly than those that treat PQC as just a box to tick.

Getting Started: A 90-Day Plan and Beyond

It’s easy to feel overwhelmed by the scope of the transition, but every journey begins with first steps. Here’s a practical short-term plan (the next 3 months or so), followed by priorities for the next 1-2 years, to kickstart your organization’s quantum-safe program:

In the Next 90 Days:

  1. Assign Ownership and Build Awareness: Identify a clear owner for PQC migration efforts. This could be an internal “Quantum Readiness Task Force” led by someone from security architecture. Have them brief senior leadership on the quantum threat and get buy-in that this is an important initiative (tie it to strategic risk management). Start training key staff on PQC basics – lunch-and-learns, sending folks to relevant courses or conferences, etc.
  2. Perform a Cryptography Inventory Blitz: Initiate an organization-wide survey of cryptographic usage. Leverage existing asset inventories and config management databases to find where protocols like TLS, SSH, IPsec, etc., are used and what libraries they rely on. Don’t forget less obvious places: embedded controllers, third-party services, partner connections. Many orgs are surprised by how much crypto underpins everything. Label each item with the algorithm (RSA-2048, ECDSA-P256, etc.) and start noting which have long-term sensitive data. For instance, systems handling personal data that must remain confidential for 10+ years should be flagged. This inventory will be the foundation for all further planning.
  3. Pilot a Hybrid Solution in a Test Environment: Pick a non-critical system or a segment of the network where you can deploy a hybrid cryptography test. A straightforward option is a test web server (or internal service) configured for hybrid TLS 1.3 key exchange (recent OpenSSL builds support X25519+ML-KEM hybrid groups, for example). Have a couple of test clients connect – recent Chrome releases support hybrid PQC key exchange out of the box or behind a flag – and ensure it works. Monitor the handshake with Wireshark to confirm the hybrid key share is being sent (it is noticeably larger than a classical one). This exercise will reveal any immediate compatibility issues: does your load balancer support the new groups? Does the TLS inspection device pass them through? Document any issues and possible solutions. This small win will also give your team hands-on familiarity with PQC technology (see the handshake probe sketch after this list).
  4. Stand up a Side-Channel Testing Lab (even if small): If you have the resources, set up a basic side-channel testing environment for your cryptographic implementations. This can start with existing tools – for example, Valgrind-based constant-time checks on a crypto library, or timing measurements of decryption and decapsulation operations across network calls (a crude timing-comparison sketch follows this list). If crypto runs on dedicated hardware (smartcards, TPMs, HSMs), consider engaging a security testing lab to assess side-channel leakage under NDA. Ninety days is too short to fully execute this, but you can at least initiate the process – schedule a test, acquire tools. The goal is to embed the mindset that implementations are verified, not trusted blindly.
  5. Begin a PQC Certificate/PKI Trial: In a lab environment, set up an internal Certificate Authority that issues a test post-quantum certificate (say, using Dilithium/ML-DSA). Modern open-source tools such as OpenSSL are adding support for ML-DSA certificates. Issue a certificate for a test site or for code signing and see whether you can use it (most software won’t recognize it yet, but you can verify signatures manually with OpenSSL). Measure the certificate size – a three-level chain with PQC signatures can run to tens of kilobytes – because it directly affects things like TLS handshake sizes (a chain-size measurement sketch follows this list). Experiments have shown that some clients and middleboxes break when certificate chains exceed roughly 10-30 KB, so this is useful knowledge. If you find an issue (e.g., your web server can’t load a PQC certificate due to software limitations), log it and raise it with the vendor. This effort lays the groundwork for production PKI updates later.
  6. Align Your Efforts with External Guidance: Pull together the key recommendations from NIST, CISA, ENISA, and similar bodies that apply to you. For instance, NIST’s SP 1800-38 draft (Migration to Post-Quantum Cryptography) lays out discovery and migration practices – check which parts you have already covered. The CISA/NSA/NIST quantum-readiness factsheet sets out concrete steps (inventory, roadmap, vendor engagement, etc.) – ensure your plan includes them. Essentially, validate that your initial 90-day actions cover the basics recommended by the leading authorities. This also gives you something to show management: “We are following government best practices on this.”
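
To support step 2, here is a minimal, standard-library-only Python sketch of the kind of TLS scan that can seed the inventory. The hostnames are placeholders for targets pulled from your own asset inventory; it records only what a default client negotiates, so treat it as a starting point rather than a complete discovery tool.

```python
# Minimal internal TLS inventory sketch (Python 3 standard library only).
# Hostnames below are placeholders -- replace with targets from your asset inventory.
import csv
import socket
import ssl

TARGETS = [("intranet.example.internal", 443), ("git.example.internal", 443)]

def probe(host: str, port: int, timeout: float = 5.0) -> dict:
    """Connect with a default client policy and record what was negotiated."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # inventory scan, not trust validation
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, proto, bits = tls.cipher()
            return {
                "host": host,
                "port": port,
                "tls_version": tls.version(),
                "cipher_suite": name,
                "key_bits": bits,
            }

if __name__ == "__main__":
    fields = ["host", "port", "tls_version", "cipher_suite", "key_bits"]
    with open("tls_inventory.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        for host, port in TARGETS:
            try:
                writer.writerow(probe(host, port))
            except OSError as exc:
                print(f"{host}:{port} unreachable: {exc}")
```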
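
For the hybrid pilot in step 3, a thin wrapper around `openssl s_client` can confirm whether a test endpoint negotiates a post-quantum hybrid group. The host and the group label are assumptions – group names vary by build (newer OpenSSL releases use names like X25519MLKEM768, while earlier oqs-provider builds used draft labels) – so adjust them to whatever your installation actually supports.

```python
# Hybrid key-exchange smoke test: drives `openssl s_client` against a test
# endpoint and prints the handshake output for manual inspection.
# HOST and GROUP are placeholders; the group label depends on your OpenSSL build.
import subprocess

HOST = "pqc-test.example.internal:443"   # placeholder test server
GROUP = "X25519MLKEM768"                  # adjust to the label your build exposes

def handshake_output(host: str, group: str) -> str:
    """Run a single TLS handshake restricted to the given key-exchange group."""
    result = subprocess.run(
        ["openssl", "s_client", "-connect", host, "-groups", group, "-brief"],
        input=b"",               # close stdin so s_client exits after the handshake
        capture_output=True,
        timeout=15,
    )
    # Combine stderr and stdout, since OpenSSL splits its diagnostics across both.
    return (result.stderr + result.stdout).decode(errors="replace")

if __name__ == "__main__":
    print(handshake_output(HOST, GROUP))
```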
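
For step 4, the sketch below shows the shape of a crude, dudect-style timing comparison: run the operation on a fixed input class and a random input class and compare the distributions. The `operation` function is a hashing placeholder for the decapsulation or decryption call you actually want to test, and Python-level timing is noisy – use this only as a first filter before proper lab measurement.

```python
# Crude timing-leak smoke test: compare timing distributions for a fixed input
# class versus random inputs. `operation` is a placeholder -- swap in the real
# decapsulation/decryption call from the library you are evaluating.
import hashlib
import os
import statistics
import time

def operation(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()   # placeholder for the real crypto call

def sample(make_input, runs: int = 20000) -> list:
    times = []
    for _ in range(runs):
        data = make_input()
        start = time.perf_counter_ns()
        operation(data)
        times.append(time.perf_counter_ns() - start)
    return times

if __name__ == "__main__":
    fixed = bytes(32)
    class_a = sample(lambda: fixed)            # fixed input class
    class_b = sample(lambda: os.urandom(32))   # random input class
    mean_a, mean_b = statistics.mean(class_a), statistics.mean(class_b)
    sd_a, sd_b = statistics.stdev(class_a), statistics.stdev(class_b)
    n = len(class_a)
    t = (mean_a - mean_b) / ((sd_a**2 / n + sd_b**2 / n) ** 0.5)   # Welch's t-statistic
    print(f"mean(fixed)={mean_a:.0f}ns mean(random)={mean_b:.0f}ns Welch t={t:.2f}")
    print("Large |t| over repeated runs hints at (but does not prove) input-dependent timing.")
```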
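
For the PKI trial in step 5, this sketch measures the on-the-wire cost of a certificate chain. It assumes a reasonably recent `cryptography` package is installed and that `chain.pem` is a placeholder path to the PEM chain your test CA produced; parsing brand-new signature algorithms may also require an up-to-date OpenSSL underneath.

```python
# Measures the wire cost of a certificate chain produced in the PKI trial.
# `chain.pem` is a placeholder path; requires the `cryptography` package.
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

def chain_report(path: str) -> None:
    with open(path, "rb") as fh:
        certs = x509.load_pem_x509_certificates(fh.read())
    total = 0
    for cert in certs:
        der = cert.public_bytes(Encoding.DER)
        total += len(der)
        print(f"{cert.subject.rfc4514_string():50s} "
              f"sig_oid={cert.signature_algorithm_oid.dotted_string:>16s} "
              f"der_bytes={len(der)}")
    print(f"total chain size: {total} bytes "
          f"({total / 1024:.1f} KB on the wire, before TLS record overhead)")

if __name__ == "__main__":
    chain_report("chain.pem")
```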

In the Next 12-24 Months:

Looking further out, your focus will shift to phased implementation across the enterprise:

  • Gradually Roll Out PQC in External-Facing Services: Once standards and products stabilize (mainstream support is arriving through 2025), deploy PQC for external connections first. For example, enable post-quantum key exchange on your public websites, VPNs, and partner links – the places where data could be captured by adversaries today. These are also usually easier to update, since you control both ends or there is an industry push. Monitor performance and user feedback; be ready to toggle it off if issues arise, but aim to leave it on. After external services, move to internal east-west traffic (datacenter to datacenter, service to service). Internal traffic may be a lower priority since it is harder for adversaries to harvest, but it still matters for completeness.
  • Adopt Composite/Hybrid Identity Solutions as Standards Emerge: Keep an eye on the IETF LAMPS working group and similar bodies. When composite certificates and hybrid signatures become standardized and supported by your vendors (say your VPN or code-signing tool allows a composite certificate with ECDSA+Dilithium), plan a migration to those. They will allow you to maintain compatibility while adding quantum safety. Make sure to maintain rollback paths – e.g., keep your classic CA alive and usable in case the new PQC CA has some unforeseen issue, until you’re confident.
  • Refresh or Augment Hardware Where Needed: Based on your earlier inventory, replace or upgrade hardware that cannot handle PQC. This might include older network appliances, older HSMs, smart cards, etc. Work this into normal tech refresh cycles to minimize cost impact. For example, if a major firewall upgrade is due in 2025, choose one that explicitly supports the new standards. If some IoT devices can’t be upgraded, consider isolating them on networks that are post-quantum protected via gateways (so the gateway does the PQC, even if the device itself remains legacy). In OT environments, this might be the only way for some very constrained devices.
  • Institutionalize Crypto-Agility in Procurement and Policy: Update your security policies to require that new systems have configurable cryptography rather than being hardwired to one algorithm. In vendor contracts, include language about responding to cryptographic breakthroughs (e.g., support for new algorithms within X months of standards being available, or maintenance SLAs covering crypto updates). This ensures that as things evolve, you are not stuck. Some organizations are even mandating SBOMs that include cryptographic information, so they can quickly search where, say, OpenSSL is used if a vulnerability appears (see the SBOM query sketch after this list).
  • Conduct Crypto Incident Drills: Use the playbooks you developed to simulate a scenario – for instance, “It’s 2028 and a cryptographically relevant quantum computer that threatens RSA-2048 has just been announced; what do we do this week?” Treat it like a security incident and see whether your team can follow the plan. You will likely find gaps (perhaps there is no mechanism to mass-revoke and reissue thousands of certificates quickly). Then refine the plan. This prepares you not only for quantum surprises but also for more mundane crypto incidents, such as a compromised root CA or an algorithm deprecated by policy.
  • Integrate AI-Enabled Defense Mechanisms: Since we anticipate adversaries will use AI, bolster your detection and response capabilities accordingly. This could mean deploying advanced anomaly detection (which often uses AI itself) on your networks that can catch unusual patterns that might indicate automated or AI-driven attacks. Also work on improving the provenance and authenticity checks of data – deepfakes and AI-generated phishing will test your ability to verify identities and content. This is somewhat orthogonal to PQC, but conceptually related as part of “future-proofing” security. ENISA’s threat landscape reports and frameworks like MITRE ATLAS can guide what scenarios to prepare for (e.g., data poisoning attacks against AI systems, automated vulnerability discovery, etc.). By weaving these considerations into your strategy, you avoid tunnel vision on just the quantum threat and instead uplift your overall resilience against a range of emerging threats.
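
As a small illustration of the SBOM point above, the sketch below answers “where do we ship OpenSSL?” across a directory of CycloneDX-style JSON SBOMs. The directory path and the watch-list of component names are assumptions – adapt them to however your SBOMs are stored and which crypto libraries you track.

```python
# Quick answer to "where do we ship OpenSSL?" across a directory of SBOMs.
# Assumes CycloneDX-style JSON (a top-level "components" array with
# "name"/"version" fields); the directory and watch-list are placeholders.
import json
import pathlib

SBOM_DIR = pathlib.Path("sboms")                      # one SBOM per product/build
WATCHLIST = {"openssl", "libssl", "bouncycastle"}     # crypto components to track

def crypto_components(sbom_path: pathlib.Path):
    doc = json.loads(sbom_path.read_text())
    for comp in doc.get("components", []):
        name = comp.get("name", "").lower()
        if any(w in name for w in WATCHLIST):
            yield comp.get("name"), comp.get("version", "unknown")

if __name__ == "__main__":
    for sbom in sorted(SBOM_DIR.glob("*.json")):
        for name, version in crypto_components(sbom):
            print(f"{sbom.stem}: {name} {version}")
```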

Overall, the coming year or two should be about implementation and integration – taking PQC from pilot to production in a controlled, stepwise fashion, while strengthening the surrounding practices that make it effective. It’s a significant effort, but breaking it into phases like this (90-day quick wins, then annual goals) helps manage it.

Conclusion: No Silver Bullets, But a Stronger Armor

The advent of practical quantum computing will mark one of the biggest shifts in the cybersecurity landscape that any of us have seen. Post-quantum cryptography is the necessary answer to that shift – without it, the core trust of our digital systems would be broken. But as we’ve argued, PQC alone is not a panacea. It must be deployed thoughtfully, alongside other improvements, to truly achieve quantum resilience.

Think of PQC as a new pillar in your security architecture. It supports the structure, but the structure itself needs solid walls, multiple pillars, and a good design. If you simply insert a PQC algorithm into a vulnerable application, you’ve perhaps solved one problem (future-proofing the math) but you might still get hacked tomorrow via a SQL injection or a misconfigured server. Or, the PQC algorithm might be implemented incorrectly and leak keys, negating its purpose. Real security comes from layering – using defense-in-depth so that no single point of failure exists. PQC should be one of those layers moving forward, complementing strong symmetric encryption, robust access controls, network segmentation, and so on.

The organizations that will come out ahead in the quantum era are those who prepare early and holistically. They will treat the PQC transition not as a checkbox compliance task (“we use Kyber now, done”) but as an opportunity to upgrade their cryptographic posture across the board. This includes building crypto-agility, so they can respond to new developments (be it a new attack on an algorithm or the emergence of an even better one) with speed rather than panic. It includes investing in the security of implementations, recognizing that a chain is only as strong as its weakest link. It means modernizing PKI – an often neglected area – so that the backbone of digital identity can support the new algorithms without breaking. And it means paying attention to adjacent developments like AI in cyber offense, because the threat landscape of 2030 will be shaped by both quantum computers and AI tools (and who knows what else – perhaps new side-channel techniques, or advances in mathematics).

In summary, PQC is a crucial transitional technology, but not a cure-all. The silver bullet mindset (“we’ll just wait for NIST to give us a new algorithm and then deploy it everywhere and we’re safe”) is risky and oversimplified. Instead, the quantum-secure future will be won by those who build systemic resilience: crypto that can evolve, systems that can withstand surprises, and security programs that assume adversaries will use every tool at their disposal (quantum, classical, AI, physical) and thus prepare on all fronts. By taking the steps outlined – from shoring up implementations to practicing agile deployment – organizations can ensure that when the quantum reckoning comes, they’ll meet it from a position of strength, not scrambling in desperation.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.