
Common Failures in a Quantum Readiness Program

Even well-run quantum readiness programs can stumble. Here are some common pitfalls in crypto-agility/PQC efforts and how to avoid them:

Treating PQC as a simple library or drop-in swap

Perhaps the biggest mistake is underestimating the ecosystem changes required. Simply implementing a PQC algorithm in code but ignoring the surrounding systems (PKI, certificates, protocols) is a recipe for trouble.

For example, you might add a Dilithium-based signature to your handshake without updating the certificate format, so clients or CAs reject it. Or a developer uses a PQC library but doesn't realize the output doesn't fit in an existing database field.
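
To make the second failure concrete, the sketch below compares rough signature sizes against a legacy database column. The ML-DSA sizes come from the FIPS 204 parameter sets (verify against your own library's output); the 2048-byte column limit is a hypothetical example:

```python
# Approximate signature sizes in bytes; ML-DSA values are from the
# FIPS 204 parameter sets -- verify against your own library's output.
SIG_SIZES = {
    "ECDSA-P256": 72,     # DER-encoded, approximate upper bound
    "RSA-2048":   256,
    "ML-DSA-44":  2420,   # formerly Dilithium2
    "ML-DSA-65":  3309,   # formerly Dilithium3
    "ML-DSA-87":  4627,   # formerly Dilithium5
}

DB_COLUMN_LIMIT = 2048    # hypothetical legacy VARBINARY(2048) signature column

for alg, size in SIG_SIZES.items():
    status = "fits" if size <= DB_COLUMN_LIMIT else "TOO BIG"
    print(f"{alg:11s} {size:5d} bytes  {status}")
```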

Solution: Take a holistic approach. Follow the IETF LAMPS drafts for how to encode PQC algorithms in certificates properly. Do end-to-end testing (CA -> cert -> client verify). Update certificate validation logic to accept the new OIDs. Plan how revocation will work (bigger CRLs? more OCSP stapling?).
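
As one piece of that validation logic, a small check with the Python cryptography package can flag which signature OID a certificate actually carries. The ML-DSA OIDs below are the NIST CSOR values referenced by the LAMPS work; confirm them against the current draft before depending on them:

```python
from cryptography import x509

# ML-DSA signature OIDs (NIST CSOR registry; cross-check with the
# current IETF LAMPS draft before shipping).
PQC_SIG_OIDS = {
    "2.16.840.1.101.3.4.3.17": "ML-DSA-44",
    "2.16.840.1.101.3.4.3.18": "ML-DSA-65",
    "2.16.840.1.101.3.4.3.19": "ML-DSA-87",
}

def classify_signature(pem_bytes: bytes) -> str:
    """Report whether a certificate is signed with a PQC algorithm."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    oid = cert.signature_algorithm_oid.dotted_string
    return PQC_SIG_OIDS.get(oid, f"classical or unknown ({oid})")

with open("server.pem", "rb") as f:     # placeholder path
    print(classify_signature(f.read()))
```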

Essentially, don’t assume things that worked with RSA/ECDSA will automatically work with PQC – revalidate each step (as our testing phase suggests). This also means involving the PKI team from the get-go, not just the app devs.

Ignoring handshake size and middlebox issues

As noted earlier, the increased size of PQC handshakes (especially if you naively combine PQC key exchange with PQC certificates) can trigger issues in the network. We've seen real cases: Chrome's deployment showed a roughly 4% median latency bump and uncovered that many middleboxes hard-coded the assumption that a TLS ClientHello fits in one packet. Organizations that don't monitor for this can suddenly see seemingly random connection failures or slowdowns and scramble without knowing why.

Solution: Always measure handshake sizes after adding PQC. If they approach typical MTU limits (~1500 bytes), expect fragmentation. Conduct compatibility tests with representative middleboxes (older firewalls, IDS, etc.). Use gradual rollouts to catch if, say, a certain office’s proxy breaks all PQC-enabled traffic – then implement fixes (maybe patch or bypass that proxy) before full deployment.
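
One way to measure is to drive the handshake through memory BIOs and count the bytes in each direction. A minimal Python sketch, assuming a reachable endpoint (the host is a placeholder):

```python
import socket
import ssl

def handshake_bytes(host: str, port: int = 443) -> tuple[int, int]:
    """Complete a TLS handshake via memory BIOs, counting bytes each way."""
    ctx = ssl.create_default_context()
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    tls = ctx.wrap_bio(incoming, outgoing, server_hostname=host)
    sent = received = 0
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            try:
                tls.do_handshake()
                break
            except ssl.SSLWantReadError:
                flight = outgoing.read()
                if flight:                      # e.g., the ClientHello
                    sock.sendall(flight)
                    sent += len(flight)
                chunk = sock.recv(16384)
                if not chunk:
                    raise ConnectionError("server closed during handshake")
                received += len(chunk)
                incoming.write(chunk)
        flight = outgoing.read()                # final client flight (Finished)
        if flight:
            sock.sendall(flight)
            sent += len(flight)
    return sent, received

sent, received = handshake_bytes("example.com")  # placeholder host
print(f"client->server: {sent} B, server->client: {received} B")
# If the first client flight approaches ~1400-1500 B, expect IP
# fragmentation or middlebox trouble on some network paths.
```

Run it before and after enabling a PQC key exchange and compare the totals against your path MTU.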

Also consider enabling connection reuse / session resumption more aggressively to amortize handshake overhead (e.g., increase the TLS session ticket lifetime if applicable, so clients can reconnect without a full handshake).
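
As a small illustration of resumption, Python's ssl module exposes session objects directly. Note that Python documents its SSLSession API as incompatible with TLS 1.3, so this sketch pins TLS 1.2; the host is a placeholder:

```python
import socket
import ssl

HOST = "example.com"   # placeholder; use a server you control

# Python's SSLSession API is documented as not compatible with TLS 1.3,
# so pin TLS 1.2 for this demonstration; for 1.3, rely on your TLS
# stack's own ticket handling instead.
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

def connect(session=None):
    raw = socket.create_connection((HOST, 443), timeout=5)
    tls = ctx.wrap_socket(raw, server_hostname=HOST, session=session)
    saved, reused = tls.session, tls.session_reused
    tls.close()
    return saved, reused

ticket, _ = connect()                 # full handshake; server issues a session
_, reused = connect(session=ticket)   # abbreviated handshake reusing it
print("second connection resumed:", reused)
```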

The point: anticipate that network plumbing might need updates (maybe firmware fixes from vendors). Over-communicate with network teams about these needs.

HSM/KMS false assumptions

Not all cryptographic hardware or services are ready for PQC, even when the documentation implies they are. For instance, an HSM might claim support but implement it only in software inside the HSM (slower) or with limitations (such as "key cannot be extracted or wrapped"). If you assume you can simply generate a Kyber key on your existing HSM and back it up as usual, you might be surprised. Or a cloud KMS might support hybrid TLS on its endpoints but not yet offer PQC key storage for you.

Solution: Work closely with vendors early. Verify features through POCs: actually try to generate a PQC key on an HSM with a real workflow, not just a vendor demo. Read HSM firmware release notes carefully; they may state, for example, that ML-KEM keys are non-wrappable by design (meaning you can't export them encrypted for backup as you would with RSA keys). That requires adjusting your key backup strategy (perhaps keeping them only in HSM clusters). Updating procedures to match the reality of the tool is crucial.

Similarly, check API support: your PKCS#11 code might need new mechanism IDs for PQC; ensure your middleware (Venafi, Keyfactor, etc., if you use them for certificate automation) can handle those or has updates planned. Confirm the timeline for those updates; otherwise your automation might break.
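
A quick sanity check is to list the mechanisms the PKCS#11 module actually advertises. A sketch using the python-pkcs11 package; the module path is a placeholder, and pre-standard PQC mechanisms may surface as raw vendor-specific IDs rather than named constants:

```python
import pkcs11

LIB_PATH = "/usr/lib/pkcs11/vendor-module.so"   # placeholder: your HSM's module

lib = pkcs11.lib(LIB_PATH)
for slot in lib.get_slots(token_present=True):
    names = [str(m) for m in slot.get_mechanisms()]
    # Vendor modules may expose PQC under pre-standard or proprietary names.
    pqc = [n for n in names
           if any(k in n.upper() for k in ("ML_KEM", "ML_DSA", "KYBER", "DILITHIUM"))]
    print(f"{slot}: {pqc or 'no PQC mechanisms advertised'}")
```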

In short, when it comes to vendor support, don't believe it until you see it: test and verify, and if something is missing, press the vendor or find a workaround until they deliver.

Shallow or incomplete inventories

Declaring “we did an inventory” but missing entire segments of your environment is dangerous because it gives a false sense of security.

Common misses: embedded systems (e.g., badge readers with hardcoded crypto), Operational Technology (like SCADA devices using proprietary crypto or legacy SSL), mainframes (often have their own TLS implementations), and third-party cloud services (maybe you didn’t think to ask what encryption your SaaS provider uses for your data – if they store long-lived data with classical encryption, that’s your exposure too). Even within IT, things like scripts with GPG encryption or database TDE (transparent data encryption) settings might be overlooked if you only scanned network ports.
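
Even a crude filesystem sweep over script directories can surface some of these misses (GPG calls, legacy OpenSSL usage). A minimal sketch; the patterns, file extensions, and directory are illustrative only:

```python
import pathlib
import re

# Illustrative indicators only; extend with patterns from your own estate.
PATTERNS = {
    "gpg usage":        re.compile(r"\bgpg2?\b"),
    "openssl RSA keys": re.compile(r"openssl\s+(genrsa|rsa)\b"),
    "weak primitives":  re.compile(r"\b(RC4|3DES|DES-CBC|MD5)\b", re.IGNORECASE),
}
SCRIPT_SUFFIXES = {".sh", ".py", ".ps1", ".pl", ".bat"}

def scan(root: str = ".") -> None:
    """Flag scripts under `root` that match any crypto indicator."""
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix in SCRIPT_SUFFIXES:
            text = path.read_text(errors="ignore")
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    print(f"{path}: {label}")

scan("/opt/batch-jobs")   # hypothetical directory of operational scripts
```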

Solution: Use multiple discovery methods (as we planned). Combine automated scans with manual outreach: send a survey to all system owners asking what crypto their systems use. Cross-validate: if your CBOM shows zero mentions of a certain algorithm but you know a certain vendor's product uses it, investigate why it isn't showing up. Maintain the inventory as an evergreen process, not a one-time effort, and use the CBOM format so each entry systematically captures every relevant field.

Whenever something is found later (and something will be), don't just fix that one case; update the discovery process to catch similar cases. For example, if you realize IoT devices were missed, incorporate an IoT discovery step (maybe scanning Wi-Fi networks for certain TLS handshakes or polling device management systems for firmware info). Also incorporate SBOM data from vendors to enrich your CBOM; that can reveal embedded crypto you can't easily scan for (like inside an appliance).
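
For reference, a single inventory record in the CycloneDX 1.6 CBOM format might look like the sketch below. The field names reflect my reading of the 1.6 cryptographic-asset schema; validate against the published spec before standardizing on it:

```python
import json

# Sketch of one CycloneDX 1.6 cryptographic-asset component.
# Field names follow the 1.6 schema as I read it; validate before adopting.
cbom_component = {
    "type": "cryptographic-asset",
    "name": "RSA-2048",
    "cryptoProperties": {
        "assetType": "algorithm",
        "algorithmProperties": {
            "primitive": "signature",
            "parameterSetIdentifier": "2048",
            "nistQuantumSecurityLevel": 0,   # no quantum resistance
        },
    },
    "evidence": {"occurrences": [{"location": "payments-api/server.pem"}]},
}

print(json.dumps(cbom_component, indent=2))
```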

The motto: assume something is hiding and seek it out proactively.


In addition to the above, another failure mode (less technical, more organizational) is lack of clear ownership: when it's everyone's problem, sometimes no one drives it. We mitigate that by explicitly naming a Program Lead and defining roles, as we did earlier.

By anticipating these failure modes, you can put safeguards in place early. It often helps to discuss these pitfalls openly in the steering committee: “We know middleboxes could be trouble, so the network team should be ready,” or “We’ll treat the inventory as done only when it’s verified by multiple methods.”
