
Planning the First Year of a Quantum Readiness Program

Embarking on a quantum readiness program can be daunting, so it’s helpful to break it into phases with concrete goals.

Below is a pragmatic 12-month plan (roughly divided into phases) that a CISO-led team could follow, modeled on a medium-size financial services company. It assumes you’re starting from little or no quantum readiness and want to establish momentum quickly:

Phase 0 (Weeks 0-2) – Mandate and Scope Definition

Begin with governance setup and top-down communication.

In the first two weeks, charter the Crypto Steering Committee (if it doesn’t exist) with a clear mission and membership from all key units (IT, Security, Privacy, Legal, etc.).

Have the CISO or other executive sponsor issue a one-page “Crypto-Agility Policy” or memo to the organization. This document should establish management’s commitment and baseline requirements: for example, “All systems must maintain an inventory of cryptography (algorithms, keys, certificates) and support approved agile algorithms. RSA-2048 and similar will be phased out by 2030. Post-quantum algorithms should be adopted in a hybrid manner starting immediately where feasible.” It doesn’t need detailed technical specs – more of a declaration that status quo “set-and-forget crypto” is no longer acceptable.

Also, in Phase 0, define what “success” looks like: perhaps an internal goal like “By end of year, have an inventory of 90% of cryptographic assets and at least 2 PQC pilot implementations running.” Finally, use this phase to identify and classify data broadly by required secrecy lifetime – publish a rubric (for instance, personal data = 10 years, certain intellectual property = 15 years) to guide prioritization. This aligns with OMB and CISA guidance stressing the need to demonstrate prioritization in planning.

Phase 1 (Weeks 2-10) – Discovery and Prioritization

Kick off the cryptographic inventory efforts on two parallel tracks: top-down and bottom-up.

The top-down track means going service by service (or department by department) and listing known applications and what cryptography they use (even if just by interviewing application owners or looking at design docs). Focus on critical business services first.

The bottom-up track means running technical discovery tools: scan your network for open ports and collect TLS/SSH configurations; scrape certificate stores (internal CAs, cloud certificate managers) to gather all certificates; search code repositories for use of crypto libraries; query HSMs/KMS for keys and their attributes.
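The code-repository part of the bottom-up track can be automated with even a crude pattern scan. A minimal sketch – the pattern list and the record format are illustrative starting points, not an exhaustive catalog:

```python
import re

# Illustrative patterns only -- extend with the libraries your codebase actually uses.
CRYPTO_PATTERNS = {
    "openssl":       re.compile(r"#include\s+<openssl/|import\s+OpenSSL", re.I),
    "java-jce":      re.compile(r"javax\.crypto|java\.security"),
    "python-crypto": re.compile(r"from\s+cryptography|import\s+hashlib|from\s+Crypto"),
    "weak-algo":     re.compile(r"\b(MD5|SHA1|DES|RC4)\b"),
}

def scan_source(path, text):
    """Return (path, line_no, tag) hits to feed into the draft CBOM."""
    hits = []
    for no, line in enumerate(text.splitlines(), 1):
        for tag, pat in CRYPTO_PATTERNS.items():
            if pat.search(line):
                hits.append((path, no, tag))
    return hits
```

Run over every repository, this gives you a rough map of where cryptography lives in your own code before you even touch the network scans.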

Combine these findings into your first draft Cryptography Bill of Materials (CBOM). It will be incomplete at first, but even a partial CBOM is useful. For instance, you might uncover that 70% of external-facing systems use TLS 1.2/1.3 with certain ciphers, 20% still allow TLS 1.0 (flag that), etc.
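Merging the two tracks into a first-pass CBOM can start with something as simple as counting protocol versions across discovered endpoints. The record shape below is a deliberately minimal stand-in, not a CBOM standard:

```python
from collections import Counter

def tls_summary(endpoints):
    """Summarize TLS versions across discovered endpoints and flag legacy ones.

    Each endpoint is a dict like {"host": ..., "tls": "1.2"} -- a minimal
    stand-in for a real CBOM record.
    """
    counts = Counter(e["tls"] for e in endpoints)
    total = len(endpoints)
    report = {v: round(100 * n / total, 1) for v, n in counts.items()}
    # Anything still speaking TLS 1.0/1.1 gets flagged for immediate attention.
    flags = sorted({e["host"] for e in endpoints if e["tls"] in ("1.0", "1.1")})
    return report, flags
```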

Also identify any hard deadlines (e.g., a mainframe using 8-year-old TLS might need urgent upgrade regardless of quantum).

Once inventory data is in hand, perform a risk-based ranking of systems/datasets: e.g., Tier 1: customer-facing or sensitive data with long life – upgrade by 202X; Tier 2: internal systems or short-lived data – upgrade later, etc.
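One defensible way to turn the Phase 0 secrecy-lifetime rubric into tiers is Mosca's inequality: if required secrecy lifetime plus migration time exceeds your estimate of years until a cryptographically relevant quantum computer, the asset is already exposed to harvest-now-decrypt-later. A sketch – the 2035 threat estimate and the 3-year margin are assumptions to replace with your own:

```python
def assign_tier(secrecy_years, migration_years, crqc_year=2035, now=2025):
    """Mosca's inequality: if x + y > z, the migration should start now.

    secrecy_years   (x): how long the data must stay confidential
    migration_years (y): how long the system will take to migrate
    crqc_year - now (z): assumed years until a relevant quantum computer
    """
    z = crqc_year - now
    if secrecy_years + migration_years > z:
        return "Tier 1 (migrate first)"
    if secrecy_years + migration_years > z - 3:  # assumed 3-year safety margin
        return "Tier 2 (schedule next)"
    return "Tier 3 (monitor)"
```

For example, intellectual property with a 15-year secrecy requirement on a system needing two years to migrate lands in Tier 1 under these assumptions, while short-lived internal data can wait.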

At the end of Phase 1, you should have a living document or database of cryptographic assets (even if rough), and a list of “high priority targets” to address first. The output of this phase is essentially an initial CBOM and a prioritized migration backlog.

Phase 2 (Weeks 10-16) – Lab Pilot and Testing

Armed with inventory knowledge and priorities, select a non-production environment to start testing PQC in practice. A good approach is to create a controlled lab that mimics a slice of your environment. For example, set up a test web server and a test client on the same network segment, using a typical enterprise TLS setup (maybe behind a test firewall or load balancer to see the full path). Then implement a hybrid TLS handshake using available tools – e.g., use OpenSSL 3.x with the Open Quantum Safe provider to enable a hybrid key-exchange group such as X25519+Kyber768 alongside a standard TLS 1.3 cipher suite like TLS_AES_128_GCM_SHA256 (in TLS 1.3, the key-exchange group is negotiated separately from the cipher suite). Generate a self-signed certificate that uses a classical algorithm for now (since PQC certs might need more fiddling).

Conduct tests: measure the TLS handshake size and latency in this isolated environment. Introduce an artificial latency of, say, 100ms, to simulate WAN conditions, and see the impact of the larger handshake on connection times. Also, deliberately break the handshake (use a client that doesn’t support the hybrid ciphersuite) to see how fallback works.
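Before (or alongside) live measurements, a back-of-the-envelope model helps set expectations for how the larger hybrid handshake behaves over a slow link. The byte counts and link parameters below are rough assumptions, not measurements:

```python
def handshake_time_ms(extra_bytes, rtt_ms, bandwidth_kbps, base_rtts=2):
    """Crude TLS handshake time model: a fixed number of round trips plus
    serialization delay for the extra key-exchange bytes a hybrid group adds.

    Ignores TCP slow start, packet loss, and MTU fragmentation -- it only
    gives an order of magnitude that lab measurements should then confirm.
    """
    serialization_ms = extra_bytes * 8 / bandwidth_kbps  # bits over a kbps link
    return base_rtts * rtt_ms + serialization_ms

# Kyber768 adds roughly 2.3 KB (public key + ciphertext) over X25519 alone
# -- an assumed figure to sanity-check against your own captures.
classical = handshake_time_ms(0, 100, 1000)     # 100 ms RTT, 1 Mbps link
hybrid    = handshake_time_ms(2300, 100, 1000)
```

Under these assumptions the hybrid handshake adds under 20 ms of serialization time on a 1 Mbps link – the real risks are middleboxes and MTU limits, which only live testing will surface.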

At the same time, in the lab, generate some PQC key pairs and certificates: e.g., use an open-source tool or your CA in test mode to create a Dilithium certificate for the server, and then attempt to have a client validate it (likely using an OQS-enabled OpenSSL that knows Dilithium OIDs). This uncovers any parsing issues.

Start testing how your internal applications handle larger certificates – maybe configure a test TLS connection with a full chain that includes a PQC intermediate CA and measure the handshake.

Basically, this phase is about learning by doing on a small scale. Log all observations: “middlebox X in lab dropped large ClientHello – noted for vendor update” or “Dilithium cert of size 15KB was accepted by OpenSSL but our custom client crashed – must fix parser”. Work out kinks here rather than in prod.

By end of Phase 2, you ideally have a working demo: e.g., a browser connecting to a test site with a hybrid TLS and maybe even a PQC test certificate, with metrics collected. This proves feasibility and builds confidence. It’s exactly what pioneers like Cloudflare did (they enabled PQC on test servers and gathered data).

Phase 3 (Months 4-6) – PKI and HSM Modernization

Now that you have empirical data and a clearer idea of what you need, focus on upgrading the backbone: your PKI and key management systems.

In this phase, the PKI team updates the Certificate Authority infrastructure: extend the schema to allow PQC algorithms. This might involve software upgrades (ensure your CA software supports larger keys or new OIDs – if not, plan a replacement or workaround).

Draft new Certificate Policy/Practice Statement sections for PQC (even if just internal for now): specify how you’ll issue, how you’ll name algorithm identifiers, etc.

Possibly set up a parallel CA hierarchy: e.g., create a new offline root (or use existing one if it’s agile enough) to issue an intermediate CA that will be used for PQC pilots. That intermediate might be a hybrid itself or just classical but dedicated to signing PQC leaf certs using an extension.

At the same time, upgrade HSM firmware in a non-prod environment if available – test generating and storing PQC keys on the HSM. If your current HSMs don’t support PQC yet, liaise with the vendor on timeline and consider using a software alternative (for test) like the Utimaco simulator in the interim.

Also, check key management policies: update key length requirements (e.g., you might say “RSA 2048 is deprecated, RSA 3072 allowed until 2030, PQC (Dilithium level 2 or higher) required for new systems by 2025” – whatever fits your risk).
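A policy like this is easier to enforce when it is machine-readable. A sketch of a per-algorithm allow table – the entries mirror the example policy above and are illustrative, not recommendations:

```python
from datetime import date

# Mirrors the example policy in the text; dates are illustrative only.
POLICY = {
    "RSA-2048":   {"new_systems": False, "sunset": date(2030, 1, 1)},
    "RSA-3072":   {"new_systems": True,  "sunset": date(2030, 1, 1)},
    "Dilithium2": {"new_systems": True,  "sunset": None},
}

def check_algorithm(algo, is_new_system, today):
    """Evaluate an algorithm choice against the crypto policy table."""
    rule = POLICY.get(algo)
    if rule is None:
        return "blocked: unknown algorithm"
    if rule["sunset"] and today >= rule["sunset"]:
        return "blocked: past sunset date"
    if is_new_system and not rule["new_systems"]:
        return "blocked: deprecated for new systems"
    return "allowed"
```

The same table can later back the automated deployment gates introduced in Phase 4.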

If you have an enterprise KMS (such as AWS KMS or Azure Key Vault), explore their PQC offerings. Ensure your usage of such services can be configured for PQC if needed.

Plan key migration: identify a couple of long-lived keys (say a root certificate, or a database encryption master key) and draft how you would re-generate or wrap them with PQC.

Maybe do a dry run: create a copy of a database, encrypt it with a Kyber-wrapped AES key using your tools to see if any code breaks.
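The dry run is really a test of the envelope-encryption pattern: the data stays under a symmetric key, and only the wrapping of that key changes to a PQC KEM. The sketch below uses stdlib stand-ins (a stubbed KEM and a hash-based stream cipher) purely to show the data flow; a real run would use ML-KEM/Kyber via liboqs or your KMS, and AES-GCM for the payload:

```python
import os, hashlib

def stub_kem_encapsulate():
    """Stand-in for Kyber encapsulation, returning (shared_secret, kem_ct).
    NOT secure: a real KEM derives both values from a public key."""
    secret = os.urandom(32)
    return secret, b"kem-ct:" + secret

def stub_kem_decapsulate(kem_ct):
    return kem_ct[len(b"kem-ct:"):]

def stream_xor(key, data):
    """Hash-based keystream XOR as a stand-in for AES-GCM (no authentication!)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def envelope_encrypt(plaintext):
    data_key = os.urandom(32)                # per-object symmetric key
    secret, kem_ct = stub_kem_encapsulate()  # the PQC KEM wraps only the key
    return {"kem_ct": kem_ct,
            "wrapped_key": stream_xor(secret, data_key),
            "ciphertext": stream_xor(data_key, plaintext)}

def envelope_decrypt(blob):
    secret = stub_kem_decapsulate(blob["kem_ct"])
    data_key = stream_xor(secret, blob["wrapped_key"])
    return stream_xor(data_key, blob["ciphertext"])
```

The point of the dry run is the shape of this flow: if your tooling can swap the key-wrapping step without touching the bulk-encryption path, re-keying the real database later becomes a contained change.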

By the end of Phase 3, you should have the capability to issue PQC credentials (certs or keys) in a test capacity and a concrete plan for each major PKI/HSM component’s upgrade timeline. You’re essentially prepping the “supply” side of crypto – making sure the infrastructure can dish out PQC material by the time the “demand” (applications wanting it) increases. This phase also involves heavy documentation and approval steps.

Phase 4 (Months 6-12) – Controlled Production Rollouts

Now the focus shifts to applying all this groundwork to real systems in a safe manner.

Identify at least two production candidate implementations to be your first quantum-safe deployments:

  1. one should be low risk, high reward – a scenario where you get significant security benefit with minimal user impact;
  2. another can be a bit more ambitious to pave the way for future rollouts.

A good choice for (1) is code signing or firmware signing: if you produce software or firmware, start signing one product’s releases with a hybrid certificate (classical + PQC signature, or a dual-signature scheme). This usually doesn’t affect runtime performance and only needs the verifying end (e.g., a software update checker) to trust the signature. You’ll likely run this as a shadow process first (sign with the new algorithm in addition to the old, but still enforce the old signature until all verifiers are updated to recognize the new one). But it gets the ball rolling.
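Shadow-mode dual signing can be expressed as a verification policy: both signatures travel with the release, but only the classical one gates acceptance until verifiers are upgraded. A sketch using HMAC as a stand-in for both signature algorithms – a real deployment would use, e.g., ECDSA plus Dilithium with proper key management:

```python
import hmac, hashlib

CLASSICAL_KEY = b"classical-signing-key"  # stand-ins for real key material
PQC_KEY       = b"pqc-signing-key"

def sign_release(artifact):
    """Attach both a classical and a PQC signature to a release artifact."""
    return {
        "classical": hmac.new(CLASSICAL_KEY, artifact, hashlib.sha256).digest(),
        "pqc":       hmac.new(PQC_KEY, artifact, hashlib.sha256).digest(),
    }

def verify_release(artifact, sigs, enforce_pqc=False):
    """Shadow mode: the classical signature is mandatory; the PQC signature is
    checked and logged, but only enforced once all verifiers are upgraded."""
    classical_ok = hmac.compare_digest(
        sigs["classical"], hmac.new(CLASSICAL_KEY, artifact, hashlib.sha256).digest())
    pqc_ok = hmac.compare_digest(
        sigs["pqc"], hmac.new(PQC_KEY, artifact, hashlib.sha256).digest())
    if not pqc_ok:
        print("warn: PQC signature failed (shadow mode)")  # telemetry, not a block
    return classical_ok and (pqc_ok or not enforce_pqc)
```

Flipping `enforce_pqc` to `True` is then the entire cutover – no change to the signing pipeline itself.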

A good choice for (2) is an internal TLS connection that you control end-to-end – for example, the connection between a front-end web server and a backend API or database within your data center, or a VPN for IT admins. Enable a hybrid KEM on that link (keeping the classical exchange as backup). Monitor it closely. Because it’s contained, if something fails you can revert quickly without customer impact. Use this as a “canary” and gather telemetry: no increase in error rates or latency? Great – expand the pilot. If an issue arises, you’ve learned something.

Over these months, gradually expand the scope: perhaps allow PQC cipher suites on an outward-facing test site for a subset of users (maybe via a feature flag or specific domain). The key is staged rollout: use load balancers to direct a small percentage of traffic through a PQC-enabled path and compare metrics.
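Hash-based bucketing gives a deterministic way to send a fixed percentage of clients down the PQC-enabled path, so the same client always gets the same treatment and the two cohorts stay comparable in your metrics. A sketch – the 100-bucket scheme and the client identifier are illustrative:

```python
import hashlib

def use_pqc_path(client_id: str, rollout_percent: int) -> bool:
    """Deterministically assign a client to the PQC canary via a stable hash."""
    bucket = int(hashlib.sha256(client_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent
```

Raising `rollout_percent` from 1 to 5 to 25 only ever adds clients to the PQC cohort; nobody flips back and forth between paths, which keeps error-rate comparisons clean.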

Also, ensure you have a rollback plan at every step (for TLS, be ready to remove the PQC ciphers; for code signing, you’d still have the classical signature present so that’s inherently rollback-able).

Simultaneously, start enforcing some policy gates: for instance, by month 12 you could mandate that any new system going live must use TLS 1.3 (no legacy protocols) and support crypto agility, and have that checked in deployment reviews. Introduce automated checks – e.g., in CI/CD, if a container is about to be deployed and its SBOM/CBOM shows it’s using a disallowed algorithm, block the deployment (you might implement this with a script that checks SBOM components for things like OpenSSL versions or known bad algos).
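The CI/CD gate can be a short script run against the build's SBOM/CBOM. The component format below follows the spirit of CycloneDX but is simplified, and the deny-list and version floor are examples rather than a complete policy:

```python
DISALLOWED_ALGOS = {"MD5", "SHA-1", "RSA-1024", "3DES"}  # example deny-list
MIN_VERSIONS = {"openssl": (3, 0)}                       # example version floor

def gate_deployment(components):
    """Return (ok, violations) for simplified SBOM/CBOM components,
    e.g. {"name": "openssl", "version": "1.1.1", "algorithms": ["RSA-2048"]}."""
    violations = []
    for c in components:
        for algo in c.get("algorithms", []):
            if algo in DISALLOWED_ALGOS:
                violations.append(f"{c['name']}: disallowed algorithm {algo}")
        floor = MIN_VERSIONS.get(c["name"])
        if floor:
            ver = tuple(int(p) for p in c["version"].split(".")[:2])
            if ver < floor:
                violations.append(f"{c['name']} {c['version']} below minimum version")
    return (not violations, violations)
```

Wired into the pipeline, a non-empty violations list fails the build, which turns the policy memo from Phase 0 into something developers hit before production does.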

By the end of Phase 4 (month 12), you should be able to point to a couple of real services in production that are quantum-resistant in at least one dimension (e.g., using hybrid key exchange), and have processes in place ensuring new development is crypto-agile by default. You’ll also have a trove of monitoring data and playbooks from these rollouts.


By following these phases, you’ve achieved a lot in one year: governance is in place, you know your crypto assets, your infrastructure is updated, and you’ve even started protecting some critical assets with PQC. The subsequent years will involve expanding and refining this – migrating more and more systems per the priority, phasing out old algorithms as standards or regulators dictate, and continuously monitoring for new developments (like new algorithms from NIST or new threats).

This phased approach is also flexible – if your organization is smaller or larger, you can adjust timeline or parallelize more. The important part is to have concrete goals each step of the way and not try to do everything at once.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.