Ready for Quantum: Practical Steps for Cybersecurity Teams
(Updated March 2026)
1. Introduction
This article has been substantially updated from its original 2024 publication to align with the Applied Quantum PQC Migration Framework v1.1 and the significant regulatory, standards, and threat landscape developments since then. The core advice remains the same: start now, start practical, and treat this as a multi-year program. What has changed is the specificity of what “start now” looks like — and the regulatory deadlines that have transformed this from a forward-looking exercise into an urgent compliance imperative.
In cybersecurity these days it is impossible to escape the noise about quantum computing. From government agencies to industry organizations and tech bloggers, everyone seems to be weighing in on how organizations should prepare for the expected arrival of Cryptanalytically Relevant Quantum Computers (CRQCs) and for Q-Day — the day when quantum computing is expected to break our current cryptographic defenses.
For a grounded perspective on when Q-Day might actually occur and what you should monitor to decide for yourself, check out “Q-Day Predictions: Anticipating the Arrival of Cryptanalytically Relevant Quantum Computers (CRQC).” But Q-Day as a single event is, in many ways, a misleading framing. As I have argued in “Q-Day Isn’t an Outage — It’s a Confidence Crisis,” the real threat is not a single dramatic day of reckoning but a gradual erosion of trust in the cryptographic foundations of digital infrastructure.
Since the original publication of this article, NIST has finalized its first post-quantum cryptography standards: ML-KEM (FIPS 203, formerly CRYSTALS-Kyber) for key encapsulation, ML-DSA (FIPS 204, formerly CRYSTALS-Dilithium) for digital signatures, and SLH-DSA (FIPS 205, formerly SPHINCS+) for hash-based signatures. Additional algorithms — FN-DSA (formerly FALCON) and HQC — are expected to follow. These standards represent years of rigorous analysis and testing, and they have triggered a wave of regulatory activity that has transformed the urgency landscape entirely.
Yet despite near-universal awareness among cybersecurity professionals, the gap between awareness and meaningful action has barely narrowed. Every CISO I speak with now knows that quantum computing threatens their cryptographic infrastructure. What most have not done is fund a program, appoint a program manager, deploy discovery tools on their top-20 systems, or send a single PQC roadmap questionnaire to their strategic vendors.
The advice they have received — “increase awareness,” “perform risk assessment,” “make an inventory of your cryptography,” “implement PQC” — remains frustratingly vague about the how. Although quantum technology is undeniably intriguing, from a practical perspective the biggest challenges lie in executing thorough inventories of cryptography, sensitive data, and critical systems; mapping system dependencies, and especially cryptographic interdependencies; understanding cryptographic performance and scale requirements; and devising a realistic, risk-driven plan for quantum risk mitigation aligned with the organization’s risk tolerance. These tasks demand the most time, effort, and expertise, yet they are the ones industry publications most often overlook.
In this article, I aim to address this gap by providing holistic, clear, actionable steps — grounded in the Applied Quantum PQC Migration Framework, a practitioner methodology born from real-world programs including one that generated over 120,000 discrete migration tasks. The guidance here is drawn from the experience of leading PQC migrations for telecoms, financial institutions, and critical infrastructure operators — organizations where cryptographic failure has consequences measured in regulatory fines, service outages, and physical safety risks.
2. Practical Reasons for Preparing Now
If you’re involved in cybersecurity, you’re likely aware of the potential risks quantum computing poses. For a refresher, consider reviewing “What’s the Deal with Quantum Computing: Simple Introduction” and “Harvest Now, Decrypt Later (HNDL)” to understand why addressing the future risk of CRQC is crucial today.
Whether CRQCs arrive in 7–8 years as I have predicted, or in 15 years as many in the field expect, there are practical, grounded reasons organizations should start preparing now. Since the original publication of this article, several of these reasons have become dramatically more concrete.
2.1. Regulatory and Compliance Deadlines Are Now Set
The regulatory landscape has transformed from vague guidance into binding timelines. NIST’s IR 8547 (Initial Public Draft, November 2024) proposes deprecating all quantum-vulnerable public-key algorithms after 2030 and disallowing them entirely after 2035. NSA’s CNSA 2.0 mandates that new national security system acquisitions must be CNSA 2.0 compliant by 2027, with software and firmware signing using PQC by 2030 and all national security systems fully migrated by 2035. The UK’s NCSC requires discovery and planning complete by 2028, high-priority migration by 2031, and full migration by 2035. Australia’s ASD has set 2030 for ceasing traditional asymmetric cryptography in government systems. Canada’s CCCS requires departmental PQC migration plans by April 2026. The EU Coordinated Roadmap calls for first steps — awareness, inventories, pilots — by end of 2026.
For organizations in regulated industries, PCI DSS v4.0 Requirement 12.3.3 already requires documentation and annual review of cryptographic cipher suites and protocols in use — effectively mandating the kind of cryptographic inventory that forms the foundation of any PQC migration program.
The implication: organizations starting now have somewhere between four and nine years, depending on jurisdiction and system criticality. A credibly planned migration for a large enterprise requires 4 to 15 years of execution. There is no slack in the schedule.
2.2. The Complexity of Transition
Transitioning to quantum-resistant cryptography isn’t as simple as flipping a switch, despite what some vendors might claim. It involves evaluating current cryptographic uses, understanding the quantum threat specific to those uses, and then implementing new protocols deemed secure against quantum attacks. Whole protocols and services will need to be re-engineered, because PQC typically places greater demands on devices and networks than traditional public-key cryptography. Sometimes it means replacing entire core critical systems outright, a transformation program in its own right that can last many years. This process is time-consuming and complex, requiring a gradual approach to manage risks effectively without disrupting existing operations.
This is a good moment to discuss Mosca’s theorem. Named after Dr. Michele Mosca, a prominent researcher in quantum computing, this theorem indicates when to start transitioning to quantum-safe cryptography. It states that if the time your data must remain secure (X), plus the time needed to migrate your cryptographic systems (Y), exceeds the time until a CRQC becomes operational (Z), that is, if X + Y > Z, then you are already too late. For more, see “Mosca’s Theorem and Post-Quantum Readiness.”
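The inequality is simple enough to check against your own estimates. A minimal sketch, with purely illustrative figures that you would replace with your organization's own numbers:

```python
def mosca_at_risk(shelf_life_years: float,
                  migration_years: float,
                  years_to_crqc: float) -> bool:
    """Mosca's inequality: if X + Y > Z, data encrypted today is at risk.

    X = how long the data must stay confidential (security shelf life)
    Y = how long the cryptographic migration will take
    Z = estimated years until a CRQC becomes operational
    """
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative example: medical records with a 25-year shelf life, a
# 7-year migration program, and a CRQC estimated 15 years out.
if mosca_at_risk(shelf_life_years=25, migration_years=7, years_to_crqc=15):
    print("Already too late for this data class: start migrating now.")
```

Running the check per data class, rather than once for the whole organization, is what turns the theorem into a prioritization tool.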
2.3. The Longevity of Data (HNDL Threat)
The “Harvest Now, Decrypt Later” (HNDL) scenario is a genuine and present concern. Adversaries can capture encrypted data today, store it, and wait until quantum computers are capable of breaking the encryption. For data that must remain confidential for decades — trade secrets, intelligence, medical records, financial data — the window for safely transitioning may already be closed.
2.4. The Integrity Threat (TNFL)
Less discussed but equally consequential is Trust Now, Forge Later (TNFL) — the integrity-side quantum threat. When CRQCs become available, adversaries will be able to forge digital signatures, compromise code-signing chains, and undermine the trust anchors that hold digital infrastructure together. For critical infrastructure operators, this means the signatures protecting safety-critical firmware could become forgeable. For financial institutions, it means the digital signatures on transactions and contracts could be repudiated. TNFL deserves at least equal attention to HNDL in your quantum risk assessment.
2.5. The Longevity of Digital Infrastructure
For organizations deploying long-lived digital infrastructures — critical national infrastructure, industrial control systems, long-term data storage solutions — the expected lifespan of these systems may exceed the anticipated arrival of CRQCs. Implementing quantum-resistant solutions now is critical to ensure these systems remain secure throughout their operational life. At minimum, implement them today in a way that would allow an easy swap of cryptographic modules in the future — the concept of crypto-agility, which I will discuss later.
2.6. Litigation and Liability Exposure
As PQC standards become established and regulatory expectations crystallize, failure to migrate creates increasing legal exposure. The first lawsuits citing failure to implement available quantum-safe protections are a matter of when, not if. Insurance companies are already beginning to ask about quantum readiness in their underwriting processes.
2.7. Enhancing Overall Cybersecurity Maturity
Preparing for Q-Day will require thorough audits, inventories of sensitive data, inventories of all cryptographic solutions, and updates to cybersecurity policies and systems. These efforts have many other cybersecurity and privacy benefits besides preparing for Q-Day. Cryptographic inventories routinely uncover classically vulnerable configurations — deprecated TLS versions still in production, RSA-1024 keys, expired certificates, hardcoded keys in application code — that represent real, present-day risk. This makes the inventory investment self-funding: it pays for itself in reduced classical risk before the quantum migration even begins. I encourage cybersecurity leaders to frame PQC migration as the umbrella program that simultaneously addresses quantum risk, classical cryptographic hygiene, and architectural modernization. Organizations that position it this way consistently secure larger budgets and broader executive support.
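To make the umbrella benefit concrete: once an inventory exists, triaging it is mechanical. A minimal sketch, where the record fields, protocol names, and thresholds are illustrative rather than a standard schema:

```python
# Triage a cryptographic inventory: flag entries that are weak *today*
# (classical risk) separately from entries that are merely quantum-
# vulnerable. Field names and thresholds are illustrative only.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

def triage(entry: dict) -> str:
    alg, bits = entry["algorithm"], entry.get("key_bits", 0)
    if entry.get("protocol") in {"SSLv3", "TLS1.0", "TLS1.1"}:
        return "classical-risk"      # deprecated protocol in production
    if alg == "RSA" and bits < 2048:
        return "classical-risk"      # e.g. RSA-1024: weak right now
    if alg in QUANTUM_VULNERABLE:
        return "quantum-vulnerable"  # fine today, CRQC-breakable later
    return "ok"

inventory = [
    {"system": "legacy-api", "algorithm": "RSA",  "key_bits": 1024},
    {"system": "vpn-gw",     "algorithm": "ECDH", "key_bits": 256},
    {"system": "backup",     "algorithm": "AES",  "key_bits": 256},
]
for e in inventory:
    print(e["system"], "->", triage(e))
```

The "classical-risk" findings are the ones that make the inventory self-funding: they justify remediation budget before any PQC migration begins.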
2.8. Competitive Advantage
By the late 2020s, demonstrating quantum resilience will become a market signal of forward-thinking security. Government and enterprise procurement increasingly requires demonstrated quantum readiness. Organizations that can respond to “Are you quantum-ready?” with documented evidence gain competitive advantage in sales cycles, particularly in financial services, healthcare, defense, and critical infrastructure sectors.
2.9. Cybersecurity Talent Attraction
In a market with a shortage of skilled cybersecurity professionals, organizations that prioritize security and engage with frontier challenges attract and retain the most qualified candidates. PQC expertise is particularly scarce — see “The Skill Stack a CISO Needs for Crypto-Agility and Quantum Readiness.”
3. Challenges with Post-Quantum Cryptography (PQC)
A misconception I frequently observe among my clients is the belief that now that NIST has released its PQC standards, implementing these new solutions will be simple and straightforward. Unfortunately, the reality is far more complex. The transition to PQC is not plug-and-play; it involves myriad intricate challenges that organizations must navigate. For more on PQC challenges, see “Post-Quantum Cryptography PQC Challenges.”
3.1. Algorithm Maturity and Ongoing Evolution
While NIST has finalized ML-KEM (FIPS 203), ML-DSA (FIPS 204), and SLH-DSA (FIPS 205), the PQC landscape continues to evolve. Additional algorithms like FN-DSA (formerly FALCON) and HQC are expected to be standardized. Selected algorithms may still require parameter adjustments as our understanding of quantum computing and cryptanalysis deepens. Organizations must be prepared for PQC algorithms implemented today to require updates or replacements in the future, emphasizing the need for crypto-agility. For more on crypto-agility, see “Introduction to Crypto-Agility.”
3.2. Performance Challenges
One of the critical challenges that organizations often overlook is the significant increase in computational and bandwidth demands. ML-KEM public keys and ciphertexts are substantially larger than their RSA or ECDH equivalents. ML-DSA signatures are roughly 40 to 80 times larger than ECDSA signatures, depending on the parameter set. These larger cryptographic objects affect TLS handshake sizes (potentially causing middlebox breakage and packet fragmentation), certificate chain transmission, and storage requirements.
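The size gap is easy to quantify from the published parameter sets. A quick comparison using signature sizes in bytes (ML-DSA sizes per FIPS 204; ECDSA P-256 shown as a raw 64-byte (r, s) pair — DER encoding adds a few bytes, so exact on-wire ratios vary):

```python
# Signature sizes in bytes: ML-DSA per FIPS 204, versus a raw (r, s)
# ECDSA P-256 signature. On-wire encodings shift these slightly.
SIG_BYTES = {
    "ECDSA-P256": 64,
    "ML-DSA-44": 2420,
    "ML-DSA-65": 3309,
    "ML-DSA-87": 4627,
}

base = SIG_BYTES["ECDSA-P256"]
for name, size in SIG_BYTES.items():
    print(f"{name:11s} {size:5d} B  ({size / base:5.1f}x ECDSA-P256)")
```

Multiply these per-object overheads by the number of signatures in a certificate chain and the number of handshakes per second, and the infrastructure impact becomes clear.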
The cumulative impact of deploying PQC across an entire infrastructure can be substantial. Increased key sizes and computational requirements consume more bandwidth and introduce latency in network communications. This is particularly impactful for IoT devices and other resource-constrained systems. Failure to account for these factors can lead to significant slowdowns, increased operational costs, and potentially compromised security if performance issues cause organizations to revert to less secure methods.
Addressing these performance challenges requires strategic approaches such as using hybrid cryptographic schemes that combine classical algorithms with PQC algorithms, improving the efficiency of PQC implementations through code optimization and hardware acceleration, and conducting extensive performance testing in actual operating environments. An incremental approach to deployment, gradually introducing PQC algorithms, allows for monitoring performance and making necessary adjustments.
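The hybrid idea — derive the session secret from both a classical and a PQC key exchange, so an attacker must break both — can be sketched as follows. Real protocols (such as the hybrid key-exchange groups deployed in TLS 1.3) define their own precise combiner constructions; this is only an illustration of the principle, with stand-in secrets:

```python
import hashlib
import secrets

def combine_shared_secrets(classical_ss: bytes, pqc_ss: bytes,
                           context: bytes = b"illustrative-hybrid-v1") -> bytes:
    """Derive one session key from two shared secrets.

    The session key stays safe as long as *either* input stays secret,
    so an attacker must break both the classical and the PQC exchange.
    Real protocols specify exact combiners; this is a sketch only.
    """
    return hashlib.sha256(context + classical_ss + pqc_ss).digest()

# Stand-ins for the outputs of, say, an X25519 and an ML-KEM-768 exchange.
classical_ss = secrets.token_bytes(32)
pqc_ss = secrets.token_bytes(32)
session_key = combine_shared_secrets(classical_ss, pqc_ss)
print(len(session_key))  # 32
```

Hybrids buy insurance in both directions: against a quantum break of the classical algorithm, and against an unforeseen cryptanalytic break of the newer PQC algorithm.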
3.3. Implementation Complexity
Adopting PQC algorithms requires significant changes to existing cryptographic libraries and protocols, which are deeply integrated into infrastructure. Transitioning involves rewriting libraries, modifying protocols, and optimizing performance, demanding substantial code changes and rigorous testing. A phased approach, starting with pilot projects, and using hybrid cryptographic schemes can help manage these challenges.
Ensuring backward compatibility with existing systems adds another layer of complexity. Many applications and protocols are tailored to specific cryptographic algorithms, making it difficult to introduce new ones without disruption. Implementing PQC in a layered manner and using intermediary solutions like gateways can bridge the gap between classical and quantum-resistant algorithms.
3.4. Compliance and Regulatory Challenges
The regulatory landscape for PQC is still maturing. While NIST has finalized its algorithm standards, the broader regulatory frameworks — how different regulators will mandate migration timelines, what evidence of progress they will require, how they will handle interim periods — are still evolving. Organizations must monitor updates from regulatory bodies, participate in industry forums, and adopt flexible, crypto-agile systems that can be updated as new standards and requirements emerge.
3.5. Cost
Transitioning to PQC involves substantial financial investments across various domains, including new technologies, training, and infrastructure upgrades. This can be daunting for organizations with extensive cryptographic infrastructures. HSMs, PKI platforms, and other cryptographic hardware may require firmware updates or outright replacement. Training cryptographic engineers and security architects in PQC deployment is an investment that many organizations have not yet factored into their budgets.
To mitigate costs, consider aligning PQC migration with existing hardware refresh cycles, cloud migrations, and vendor contract renewals — piggybacking on already-funded infrastructure changes is the most cost-effective approach. Adopting a phased approach, starting with pilot projects, allows organizations to spread out costs and gain practical insights.
4. What You Shouldn’t Do
Before getting into the specifics of how to prepare, it’s important to understand certain pitfalls to avoid. Based on my experience in the industry, some of the most common ones include:
4.1. Don’t Frame It as an IT Project
The single most common failure mode is treating PQC migration as a technology project rather than an enterprise transformation program. Organizations that frame it as a project get project-level funding (one year, limited scope) and project-level authority (a mid-level manager who cannot convene business units or negotiate vendor contracts). PQC migration will touch every application, every integration, every vendor relationship, and every data store that relies on public-key cryptography. It needs a multi-year funded program with executive sponsorship and cross-functional governance. Programs that skip or underinvest in securing the executive mandate stall within 6–12 months when they hit their first resource conflict or political obstacle.
4.2. Don’t Delegate to Vendors
Assuming “our vendors will sort this out” is one of the most dangerous misconceptions. Vendors will update their products on their own timelines, optimizing for their own priorities. Without an internal program driving requirements, tracking commitments, and testing deployments, the organization has no control over its migration timeline. In most programs I have led, the longest segment of the critical path runs through vendor product GA dates — not internal execution capacity. Vendor governance is not a nice-to-have; it is a program-critical workstream that must start in the first quarter.
4.3. Don’t Rush to Lock Down Systems
Reacting hastily to doom-and-gloom articles or regulatory inquiries might lead to premature “locking down” of systems to demonstrate seriousness. Such knee-jerk reactions can disrupt day-to-day operations, especially if access to data is excessively restricted due to HNDL fears. While it’s crucial to prepare, the disruption of existing cryptographic systems by quantum capabilities isn’t immediate. By starting your preparations now as a structured program, you can proceed in a calm and coordinated manner.
4.4. Don’t Wait for Perfection
You will never have a complete cryptographic inventory. You will never have perfect risk scores for every system. You will never have every vendor roadmap confirmed. The organizations that make meaningful progress embrace the principle of progressive refinement — start with what you can discover quickly, score what you can, and begin migrating the highest-priority systems while continuing discovery on the rest. The Framework’s phases are logically sequential but operationally overlapping: you will be running discovery on some systems while executing pilots on others.
5. What You Should Do
The steps below follow the Applied Quantum PQC Migration Framework’s 8-phase lifecycle, adapted as practical guidance for cybersecurity teams. They are not a substitute for the full methodology — they are a starting map for teams that need to know what to do next Monday.
5.1. Secure the Executive Mandate and Multi-Year Funding
Nothing happens without budget, authority, and organizational commitment. Securing support from senior leadership is the critical first step in transitioning to post-quantum cryptography — and the most common failure point.
To gain senior leadership support, it’s essential to clearly articulate the urgency. In my experience, the most successful programs frame the business case around four urgency drivers that are concrete and current — not speculative Q-Day predictions:
Regulatory and compliance deadlines. The timelines described in Section 2.1 are real and approaching. Non-compliance carries financial and legal consequences. Lead with these when presenting to the board.
HNDL and TNFL exposure for sensitive data. For organizations handling data with multi-decade confidentiality or integrity requirements, these risks are already material. Quantify this exposure for your specific data assets.
Litigation and liability exposure. As PQC standards become established, failure to migrate creates increasing legal exposure. Frame this as risk management.
The umbrella benefit. Demonstrate that the cryptographic inventory alone will surface and enable remediation of current classical vulnerabilities — deprecated TLS versions, weak keys, expired certificates, hardcoded credentials — that represent real, present-day risk. This makes the program self-funding before the quantum migration even begins.
Develop a compelling business case that includes cost-benefit analyses, potential risks of inaction, and projected return on investment. Show that proactive measures are not just a cost but an investment in the organization’s future stability and compliance posture.
5.1.1. Practical Steps for Securing the Executive Mandate
- Develop a Board-Ready Briefing: Prepare a presentation that outlines the threats posed by quantum computing, the regulatory deadlines now on the calendar, the four urgency drivers, and the benefits of the umbrella program approach. Use real-world data: the 120,000-task reference from a large-scale migration, the specific regulatory timelines for your jurisdiction, and the umbrella benefits that make the investment self-funding. See “What is the Quantum Threat? A Guide for C-Suite Executives and Boards.”
- Secure an Executive Sponsor: This must be the CISO, CIO, or equivalent — someone with the organizational standing to secure multi-year budget commitments and cross-functional authority. A mid-level security manager cannot drive this program. The sponsor must be able to convene business unit leaders, negotiate with procurement, and report to the board.
- Build the Budget Structure for Multi-Year Commitment: A single-year budget is a recipe for failure. PQC migration is a 4–15 year program. Structure the budget to piggyback on existing hardware refresh cycles, cloud migrations, and vendor contract renewals. This is the most cost-effective approach and makes the program more politically sustainable. The budget should include tool procurement, staffing, training, and external expertise.
- Establish Governance Structure: The governance model that works in practice has four layers:
- Executive Sponsor (CISO or CIO): Visible owner; clears roadblocks; briefs the board quarterly.
- Steering Committee (SteerCo): Cross-functional representation from Security, Enterprise Architecture, AppDev, Infrastructure/NetSec, PKI/Identity, Compliance/Legal, Procurement, and Business Units. Meets monthly. Must have decision authority over budget, timelines, risk acceptance, and vendor escalation — not just advisory status.
- Quantum Readiness Program Manager (QRPM): Day-to-day leader; runs the plan, risk log, and KPIs; coordinates workstreams. Reports weekly to SteerCo lead.
- Workstream Leads (one per domain) covering: Inventory & Discovery, Network & TLS/VPN, PKI & Code Signing, Applications & Platforms, Embedded/IoT/OT, Policy/Compliance/Procurement, Vendor Orchestration, and Education & Change Management.
- Draft a Program Charter: A one-page charter covering purpose, scope, success criteria, decision cadence (weekly PMO, monthly SteerCo, quarterly board), and escalation path.
- Conduct an Initial Scoping Assessment: Before launching full discovery, spend 2–4 weeks identifying your top 20 revenue-generating or mission-critical systems, the primary cryptographic protocols each uses, and the 5–10 vendors whose PQC readiness will most constrain your timeline. This calibrates Year 1 planning and strengthens the business case with concrete data.
5.2. Establish a Cross-Functional Team for Quantum Readiness
The foundational step once the mandate is secured is to assemble the cross-functional team described in the governance structure above. This team will be pivotal in navigating the technical, regulatory, and strategic challenges associated with transitioning to quantum-safe cryptographic systems.
Quantum readiness impacts multiple areas of the organization, from IT and cybersecurity to legal, compliance, and business operations. A cross-functional team brings together diverse expertise, ensuring comprehensive coverage. The team must be empowered with the authority to make decisions and implement changes across the organization without being hindered by bureaucratic obstacles.
Designate “crypto champions” — one per platform or application team — who serve as the workstream’s liaison and knowledge bridge to the broader engineering community. This approach scales expertise across the organization far more effectively than centralizing all cryptographic knowledge in a small team.
5.2.1. Practical Steps to Establish the Team
- Define Scope and Objectives: Clearly outline the scope of the quantum readiness program, identifying specific goals (cryptographic inventory completion, hybrid pilot deployments, vendor governance establishment), deliverables, and timelines. Set measurable objectives such as: 70% Tier-1 CBOM coverage within 6 months, two hybrid pilots deployed within 9 months, top 10 vendors assessed within 12 months.
- Select the Right Members: Include representatives from key departments. Ensure that the team includes subject matter experts in cryptography, PKI architecture, and relevant regulatory frameworks. Consider hiring or contracting specialized PQC expertise — this skillset is scarce and getting scarcer.
- Provide Training: Invest in a tiered training approach: executive education (1 day) for SteerCo and senior leadership; PQC foundations (3–5 days) covering algorithm overview, hybrid deployment, CBOM, and risk assessment for all workstream participants; deep technical training (ongoing) with hands-on lab exercises for security engineers and architects.
- Develop a Detailed Action Plan: Create a detailed project plan with specific steps, timelines, milestones, and risk mitigation strategies. Use the 90-Day Quick Start template from the Framework (see Section 7 below) as the foundation.
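A CBOM entry need not be elaborate to be useful when setting the coverage objectives above. A minimal sketch of one record, loosely modeled on the CycloneDX 1.6 cryptographic-asset component — the field names here are illustrative, so consult the CycloneDX specification for the authoritative schema:

```python
import json

# One cryptographic-asset record, loosely modeled on CycloneDX 1.6
# CBOM components. Field names are illustrative, not authoritative.
cbom_entry = {
    "type": "cryptographic-asset",
    "name": "RSA-2048",
    "cryptoProperties": {
        "assetType": "algorithm",
        "algorithmProperties": {
            "primitive": "signature",
            "keySize": 2048,
            "quantumSecurityLevel": 0,  # breakable by a CRQC
        },
    },
    # Where the asset was found: the value below is a made-up example.
    "occurrences": [{"location": "payments-api:/etc/tls/server.crt"}],
}
print(json.dumps(cbom_entry, indent=2))
```

Even a spreadsheet with these fields per asset is enough to compute the "70% Tier-1 CBOM coverage" KPI; a machine-readable format simply makes the tracking automatable.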
5.3. Launch an Awareness Campaign on Quantum Computing
Once the project team is set up, launch an enterprise-wide awareness campaign. While quantum awareness has improved significantly since 2024, many organizations still suffer from misunderstandings that can derail the program — particularly the belief that “this is a vendor problem” or that migration will be a straightforward algorithm swap.
Balance the Narrative: Avoid focusing solely on fear, uncertainty, and doubt. While it’s important to discuss the challenges, it’s equally important to highlight the umbrella benefits — the cybersecurity improvements, compliance artifacts, and operational efficiencies the program will deliver regardless of when CRQCs arrive.
Tailor Communication Strategies: Develop targeted messages for different stakeholder groups:
- Board and Executive Management: Focus on regulatory deadlines, litigation risk, and the program’s self-funding nature through classical vulnerability discovery.
- IT and Security Teams: Provide detailed information on the NIST-standardized algorithms (ML-KEM, ML-DSA, SLH-DSA), the hybrid deployment approach, and the practical challenges of migration (handshake sizes, HSM constraints, middlebox breakage).
- Development Teams: Emphasize the crypto-agility architecture principles — no direct algorithm calls in application code, algorithm selection driven by configuration, automated certificate lifecycle management — and how these will be integrated into CI/CD pipelines.
- General Staff: Create accessible materials explaining the basics and the broader implications for the organization.
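The crypto-agility principle highlighted for development teams — no direct algorithm calls in application code, selection driven by configuration — can be sketched as a thin indirection layer. The "signers" below are hashing stand-ins, not real signature implementations:

```python
import hashlib

# A thin indirection layer: application code never names an algorithm;
# it uses whatever the deployment configuration selects. The registry
# entries are hashing stand-ins, not real signature implementations.
SIGNERS = {
    "rsa-2048":  lambda data: hashlib.sha256(b"rsa|" + data).hexdigest(),
    "ml-dsa-65": lambda data: hashlib.sha256(b"mldsa|" + data).hexdigest(),
}

CONFIG = {"signing_algorithm": "ml-dsa-65"}  # changed here, not in app code

def sign(data: bytes) -> str:
    return SIGNERS[CONFIG["signing_algorithm"]](data)

sig = sign(b"release-artifact-v1.4.2")
# Swapping algorithms is a one-line config change; callers are untouched.
CONFIG["signing_algorithm"] = "rsa-2048"
assert sign(b"release-artifact-v1.4.2") != sig
```

The same indirection applies to key exchange, certificate issuance, and token validation: whenever an algorithm must later be swapped, the swap should be a configuration change, not a code change.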
Involve External Experts: Given the complexity of quantum computing, engaging external experts for training sessions and insights remains valuable. There have been instances where major companies tasked internal non-experts with preparing awareness materials, resulting in confusion and conflation of quantum computing with unrelated topics such as quantum mind theories.
Promote a Culture of Continuous Learning: Encourage ongoing curiosity about emerging technologies. Offer incentives for employees to participate in training programs and stay informed about the latest developments in quantum computing and cybersecurity.
5.3.1. Practical Steps to Launch an Awareness Campaign
- Define Objectives and Audience: Clearly define the goals — educating employees about quantum computing fundamentals, explaining risks and opportunities, and preparing the organization for the transition. Segment the audience: executives, IT staff, security teams, development teams, and general employees. Tailor messages to each group’s specific concerns and interests.
- Develop Educational Content: Develop a range of materials including whitepapers, articles, infographics, videos, and presentations. Ensure the content covers: basics of quantum computing and how it differs from classical computing; the HNDL and TNFL threats in concrete business terms; the NIST-standardized PQC algorithms (ML-KEM, ML-DSA, SLH-DSA) and what they mean for the organization; and the regulatory deadlines now on the calendar. Collaborate with quantum computing experts and academic institutions to ensure accuracy. Use analogies and simplified explanations to make complex concepts accessible to non-technical audiences.
- Tailor Communication Strategies: Provide high-level briefings for executives focusing on regulatory deadlines, litigation risk, and the umbrella program benefits. Conduct detailed technical workshops for IT and security teams explaining hybrid deployment, CBOM construction, and infrastructure challenges. Organize interactive sessions for general staff using engaging formats such as quizzes and interactive webinars.
- Utilize Multiple Communication Channels: Publish articles and updates on the company intranet and through internal newsletters. Send targeted email campaigns with key messages. Host live webinars and workshops with Q&A sessions. Use posters and digital signage in common areas to reinforce key messages and maintain campaign visibility.
- Engage External Experts: Host guest speakers from academia, industry, and cybersecurity organizations to provide authoritative insights. Engage with industry groups focused on quantum computing to share knowledge and best practices.
- Monitor and Evaluate Effectiveness: Implement feedback mechanisms such as surveys and polls to gauge the campaign’s effectiveness. Monitor participation rates in webinars, workshops, and training activities. Use feedback and engagement data to refine the campaign.
- Maintain Momentum: Provide regular updates on quantum computing advancements and organizational plans. Offer ongoing learning opportunities such as advanced workshops and certification programs. Periodically reinforce key messages to keep quantum readiness top of mind.
5.4. Engage External Parties for Knowledge Sharing and Collaboration
Quantum computing and PQC represent fields where the pace of development requires active engagement with the external ecosystem.
Stay Informed on Standards Development: Monitor updates from NIST (including IR 8547 and SP 1800-38), NSA/CNSA 2.0, ETSI, ENISA, and ISO. Subscribe to newsletters, attend webinars, and participate in public comment periods.
Collaborate with National Cybersecurity Agencies: Establish liaison relationships with your local national cybersecurity agencies (NCSC UK, CISA, BSI, ANSSI, ACSC, etc.). These agencies are increasingly publishing sector-specific quantum readiness guidance and can provide valuable insights into governmental priorities.
Join Industry Groups and Consortia: Groups focused on quantum-safe security — such as the ETSI Quantum-Safe Cryptography Working Group, the PQCC, the PKI Consortium PQCMM, the GSMA Post-Quantum Telco Network Task Force, and sector-specific ISACs — provide access to collaborative opportunities, shared best practices, and early visibility into emerging requirements.
5.5. Prepare Your Third Parties for the Arrival of CRQC
The reality is that the majority of companies will not start preparing in a timely manner. If your organization begins its preparations now, a significant portion of your risk exposure will come through your third parties — providers of services, software, and infrastructure that are deeply embedded and interlinked with your systems. This is important enough that the Framework dedicates an entire phase (Phase 7) to vendor and supply chain governance.
At this point in the process, you should inform all your relevant partners and providers that in the coming months you will be working to understand your total quantum risk exposure, which will include assessing their readiness for quantum threats as well.
5.5.1. Practical Steps for Preparing Your Third Parties
- Communicate Early and Clearly: Send formal communications to third-party vendors and partners, informing them about your quantum readiness initiatives. Organize webinars or meetings to discuss the necessity of transitioning to quantum-safe cryptography. Providing this early heads-up demonstrates transparency and proactive risk management.
- Send PQC Roadmap Questionnaires to Strategic Vendors: For your top 10–20 critical vendors, send structured questionnaires asking for: their PQC migration roadmap with specific GA dates; which NIST-standardized algorithms they plan to support (ML-KEM, ML-DSA, SLH-DSA); their hybrid deployment strategy; their FIPS 140-3 validation plans; and whether they can provide CBOM-compatible documentation of their cryptographic implementations. Start this in the first quarter — do not wait for CBOM completion.
- Incorporate Quantum Readiness in Contracts: Update contracts and SLAs to include clauses requiring third parties to adhere to quantum-safe practices, specify timelines and milestones for PQC transition, and commit to providing ongoing cryptographic transparency. Add PQC and crypto-agility requirements to all new RFPs and contract renewals immediately.
- Classify Vendors by PQC Criticality: Identify which vendors’ products sit on the critical path for your Tier-1 and Tier-2 system migration. Classify them as “Strategic Blocking” (on the critical path, no PQC support yet), “Strategic Enabling” (PQC support planned/available, needed for migration), or “Non-Critical” (migration timeline not constrained by this vendor).
- Monitor and Review Progress: Establish mechanisms for regular monitoring of vendor PQC readiness: monthly tracking for strategic blocking vendors, quarterly for others. Report vendor status to SteerCo as a standing agenda item.
Your proactive approach can motivate your suppliers to begin their own preparations, creating a ripple effect across your extended ecosystem. As more organizations within your network adopt quantum-safe practices, the overall resilience against CRQC threats is enhanced.
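The vendor classification described above reduces to a simple, repeatable decision rule. A minimal sketch in Python (the `Vendor` fields and helper name are my own; the three category labels follow the Framework's terminology):

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    on_critical_path: bool   # blocks Tier-1/Tier-2 system migration?
    pqc_support: str         # "none", "planned", or "available"

def classify_vendor(v: Vendor) -> str:
    """Map a vendor onto the three PQC criticality classes."""
    if not v.on_critical_path:
        return "Non-Critical"
    if v.pqc_support == "none":
        return "Strategic Blocking"
    return "Strategic Enabling"  # PQC planned or available, needed for migration

vendors = [
    Vendor("HSM supplier", True, "none"),
    Vendor("Cloud KMS provider", True, "planned"),
    Vendor("HR SaaS", False, "none"),
]
for v in vendors:
    print(f"{v.name}: {classify_vendor(v)}")
```

Classifications like this feed directly into the monitoring cadence in the steps above: monthly tracking for Strategic Blocking vendors, quarterly for the rest.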
5.6. Perform Cryptographic Inventory
Performing a thorough cryptographic inventory remains your most important preparation step. Recommended by every major post-quantum security guidance document and framework, this step is essential for understanding and mitigating quantum-related risks. However, it is often portrayed as a straightforward task, when in reality it is a complex and lengthy exercise that requires extensive technical knowledge and coordination across the organization. Based on my experience so far, large organizations should plan for this step alone to take one to two years of dedicated team effort to achieve comprehensive coverage.
The situation is further complicated by cryptographic inventory tool vendors who may imply that their tools provide a complete solution. While these tools are invaluable in aiding the inventory process, they can never provide a 100% inventory on their own. A holistic approach combining tools, manual audits, and continuous monitoring is necessary to achieve a comprehensive cryptographic inventory.
Important: Rather than attempting to discover everything simultaneously — which is the most common cause of inventory paralysis — use a risk-driven scoping approach. Identify your Tier-1 systems using existing organizational knowledge (revenue data, business impact assessments, architecture diagrams) and focus initial discovery there. You do not need a cryptographic inventory to know which systems matter most. This approach, aligned with the Minimum Viable CBOM model described in Section 5.7, concentrates initial discovery where risk concentrates, delivering actionable inventory quickly.
5.6.1. Cryptographic Inventory Tools
There are a number of very useful tools on the market. Some of the most well-known (in no particular order, and without endorsement) include SandboxAQ AQtive Guard, IBM Quantum Safe Explorer, InfoSec Global AgileSec Analytics, and Keyfactor's Crypto-Agility Platform. The tooling landscape has matured significantly since 2024, but the fundamental principle remains: no single tool provides complete visibility.
When selecting automated tools, ensure they offer coverage across as many of these capabilities as possible:
Passive Network Traffic Monitoring: The tool should passively monitor network traffic to identify encrypted communications. This involves analyzing data packets to detect TLS/SSL, IPsec, SSH, and other secure protocols. This maps out where cryptographic functions are utilized across the network — corresponding to Layer 1 (Infrastructure Cryptography) in the Minimum Viable CBOM model.
Runtime Application Monitoring: The tool should monitor applications at runtime to detect calls to known cryptography APIs. This identifies dynamically loaded libraries and cryptographic operations not evident in static code analysis. It ensures that all cryptographic uses, including those in memory or during specific application states, are accounted for.
Filesystem Scanning: Comprehensive filesystem scanning to locate DLLs and other libraries known to contain cryptographic functions. This uncovers implementations that might not be directly visible through code or network analysis.
Source Code Analysis: Thorough review of all accessible source code to identify uses of cryptography, including scanning for known cryptographic libraries, functions, and custom implementations. Source code analysis provides detailed understanding of how cryptographic techniques are applied and can reveal hard-coded keys or deprecated algorithms.
Deep Binary Analysis: Analysis of compiled binary code for embedded cryptographic operations. This is particularly useful for proprietary software or third-party applications where source code is not available — corresponding to Layer 4 (Embedded/Third-Party Cryptography) in the Minimum Viable CBOM model.
Database Scanning: Capabilities to scan databases for cryptographic usage, such as encrypted columns or fields and cryptographic functions within stored procedures.
Memory Dump Analysis: Capability to analyze memory dumps to detect cryptographic keys, algorithms, and operations in use during runtime that may not be evident through static or dynamic code analysis.
While automated tools play a crucial role, they have significant limitations. They often fall short in uncovering non-standard or deeply embedded implementations, custom cryptographic algorithms, proprietary encryption methods, or cryptographic functions embedded within obscure code paths. Additionally, they may struggle with environments that have limited visibility, such as encrypted communication channels or protected storage areas.
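Whatever mix of tools you deploy, their raw observations need to be normalized into quantum-vulnerability flags before they can drive prioritization. A minimal sketch of such post-processing for observed TLS key exchange (the rules and labels are illustrative and not tied to any specific tool; `X25519MLKEM768` and `SecP256r1MLKEM768` are the IETF hybrid group names):

```python
# Illustrative post-processing of passive-scan observations: classify the
# key exchange seen on each endpoint for quantum vulnerability.
SHOR_VULNERABLE = {"RSA", "ECDHE", "DHE", "ECDH", "DH"}
CLASSICAL_GROUPS = {"X25519", "X448", "P-256", "P-384", "P-521"}
HYBRID_GROUPS = {"X25519MLKEM768", "SECP256R1MLKEM768"}

def classify_kex(observed: str) -> str:
    g = observed.upper()
    if g in HYBRID_GROUPS or "MLKEM" in g:
        return "quantum-safe (hybrid/PQC)"
    if g in CLASSICAL_GROUPS or any(k in g for k in SHOR_VULNERABLE):
        return "Shor-vulnerable"
    return "unknown - flag for manual review"

for obs in ["ECDHE", "X25519", "X25519MLKEM768", "FFDHE2048"]:
    print(obs, "->", classify_kex(obs))
```

The "unknown" branch matters: anything a rule set cannot classify is exactly the non-standard or custom implementation that requires the manual investigation described above.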
5.6.2. Approach
To overcome these challenges, adopt a comprehensive approach combining automated tools with manual methods. Begin by deploying automated scanning tools to cover the broad spectrum of the IT environment. Supplement with manual code reviews and audits, particularly focusing on legacy systems, custom applications, and third-party software. Engage software developers, system architects, and third-party vendors to gain insights into less visible cryptographic uses.
Run three parallel discovery tracks, coordinated through integrated governance:
Track A — Cryptographic Discovery: Identifies where and how cryptographic functions are implemented. Uses the automated tools described above plus manual investigation.
Track B — Sensitive Data Discovery: Identifies, classifies, and catalogs all sensitive data. This is essential for prioritization — without knowing the sensitivity of data protected by a cryptographic implementation, you cannot prioritize effectively.
Track C — Systems and Assets Discovery: Catalogs and classifies all hardware and software assets, providing visibility into IT and OT infrastructure and helping prioritize migration based on system criticality.
These three tracks use different tools and methodologies but must be integrated — the intersection of cryptographic vulnerability, data sensitivity, and system criticality is what drives intelligent prioritization in the risk scoring phase.
5.6.3. Practical Steps for Performing Cryptographic Inventory
- Preparation and Planning: Define scope encompassing servers, databases, applications, network devices, IoT devices, cloud environments, and third-party systems. Establish goals: identifying cryptographic vulnerabilities, building the CBOM, planning for PQC migration, and ensuring compliance.
- Deploy Automated Discovery: Deploy automated tools to monitor networks, analyze filesystems, capture runtime calls to cryptographic functions, and scan source code. Configure tools for your highest-priority systems first (Tier-1 internet-facing systems and systems handling the most sensitive data).
- Supplement with Manual Investigation: Perform manual code reviews to uncover non-standard or deeply embedded cryptographic uses. Engage developers, system architects, and third-party vendors for insights on implementations not visible through automated tools.
- Utilize Dependency Analysis Tools: Trace and document dependencies between software components. Map which components rely on specific cryptographic functions and how they interact, providing a clearer picture of potential vulnerabilities and critical points.
- Integrate with Configuration Management Databases (CMDBs): Integrate cryptographic inventory data with existing CMDBs to maintain an up-to-date record and cross-reference with other enterprise data (business impact assessments, change management records, vendor relationships).
- Deploy a Cryptographic Management Platform: Consider deploying a platform to centralize management and monitoring of cryptographic keys, certificates, and algorithms. These platforms provide real-time visibility, automate routine tasks such as key rotation, and enforce policies.
- Establish Continuous Monitoring: Use automated monitoring to continuously track cryptographic implementations and detect changes. Set up alerting for deviations or new cryptographic instances. Discovery is not a phase that “completes” — it is a permanent operational capability.
- Conduct Regular Audits: Perform regular audits (quarterly recommended) to verify accuracy and ensure compliance. Maintain and update the inventory to reflect new applications, updates, and configurations.
5.7. Build a Cryptographic Bill of Materials (CBOM)
The Cryptographic Bill of Materials transforms raw inventory data into a structured, queryable, standardized record that serves as the single source of truth for all subsequent phases — risk scoring, pilot scoping, migration tracking, and audit evidence. Without it, your inventory data remains a collection of scan results and spreadsheets that must be re-discovered every time someone needs an answer.
Begin CBOM work concurrently with discovery — not after discovery is “complete.” Define the CBOM schema and tooling in the first weeks so that discovery data flows directly into a structured format from the start. This is critical for avoiding the most common CBOM mistake: treating it as something you build once discovery is done.
5.7.1. The Minimum Viable CBOM Model
The conventional approach to CBOM — attempting to catalog every cryptographic function call in every system before proceeding — is a completeness trap that delays migration indefinitely. The Minimum Viable CBOM model takes an architecture-first approach organized in four layers:
Layer 1 — Infrastructure Cryptography: TLS, SSH, and IPsec configurations on load balancers, reverse proxies, VPN concentrators, and network devices. Discoverable through network scanning and configuration review. Represents the largest HNDL attack surface and is the most amenable to hybrid deployment. Achieve comprehensive coverage here first — weeks to months.
Layer 2 — Platform Cryptography: Cryptographic services provided by platforms, frameworks, and middleware — cloud KMS, HSMs, certificate authorities, identity providers, service mesh mutual TLS. Discoverable through cloud API queries, HSM audit logs, and platform configuration review. Achieve comprehensive coverage alongside Layer 1.
Layer 3 — Application Cryptography: Cryptographic operations in application code — encryption of data at rest, digital signature generation/verification, token creation, custom protocol implementations. Requires code scanning and runtime analysis. Achieve targeted coverage for high-risk applications — months.
Layer 4 — Embedded/Third-Party Cryptography: Cryptographic implementations in vendor products, firmware, IoT devices, and OT systems where the organization has no source code access. Requires vendor documentation review, binary analysis, or acceptance of incomplete visibility. Accept documented incompleteness here and manage it through vendor governance (Section 5.16).
Use CycloneDX format — the de facto standard for CBOM, with native support for cryptographic asset types including algorithms, certificates, keys, protocols, and dependencies. Each CBOM entry should capture at minimum: component identifier, algorithm OID, key size, protocol context, implementation (library + version), certificate reference, data classification, quantum vulnerability status, migration status, owner, and vendor dependency flag.
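A minimal sketch of one such entry in CycloneDX-style JSON, built in Python. The `bomFormat`/`specVersion` top-level keys, the `cryptographic-asset` component type, and the `cryptoProperties`/`oid` fields follow CycloneDX 1.6; the name/value pairs under `properties` mirror the minimum attributes listed above and are illustrative organizational extensions, not core schema:

```python
import json

# One CBOM record for a TLS endpoint's key-exchange algorithm.
entry = {
    "type": "cryptographic-asset",
    "bom-ref": "crypto/tls-frontend/rsa-2048",
    "name": "RSA-2048",
    "cryptoProperties": {
        "assetType": "algorithm",
        "oid": "1.2.840.113549.1.1.1",  # rsaEncryption
    },
    "properties": [
        {"name": "protocolContext", "value": "TLS 1.2 key exchange"},
        {"name": "implementation", "value": "OpenSSL 3.0.13"},
        {"name": "dataClassification", "value": "Confidential"},
        {"name": "quantumVulnerabilityStatus", "value": "shor-vulnerable"},
        {"name": "migrationStatus", "value": "not-started"},
        {"name": "owner", "value": "platform-team"},
        {"name": "vendorDependency", "value": "false"},
    ],
}

cbom = {"bomFormat": "CycloneDX", "specVersion": "1.6", "components": [entry]}
print(json.dumps(cbom, indent=2)[:120])
```

Keeping every entry in a machine-readable shape like this is what makes the CBOM queryable for risk scoring, CI/CD gating, and audit snapshots later.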
5.7.2. Practical Steps for Building the CBOM
- Select Format and Tooling: Adopt CycloneDX. Decide where the CBOM will live — a dedicated tool, CMDB extension, version-controlled repository, or purpose-built database. Consider integration needs with CI/CD pipelines, SBOM tooling, and reporting systems.
- Populate from Inventory Data: Map Phase 1 discovery data into CBOM records. Import automated discovery results using tool-native exporters or CBOMkit. Enrich with manual investigation findings. Cross-reference with SBOM data to establish library dependency chains.
- Integrate into Operational Processes: The CBOM must be a live, queryable data set — not a static document. Integrate into CI/CD pipelines (new deployments auto-generate CBOM entries), change management (CAB reviews include CBOM impact assessment), vendor onboarding (new products must provide CBOM-compatible documentation), and audit/compliance (CBOM snapshots provide regulatory evidence).
- Establish Freshness Governance: Define update triggers: new system deployment → auto-generate entries; library version update → update implementation version; certificate renewal → update certificate reference; quarterly full scan → reconcile CBOM against latest discovery; vendor product update → request updated CBOM data; algorithm deprecation notice → flag all affected entries and trigger risk re-scoring.
- Link to SBOM: The CBOM gains significant value when linked to Software Bill of Materials data, because SBOM reveals the dependency chains through which cryptographic libraries propagate. A vulnerable algorithm in a widely-shared library affects every application that depends on it.
5.8. Assess Cryptographic Vulnerabilities
After completing the initial inventory and CBOM population, assess the vulnerabilities of these cryptographic systems. For many identified systems, the assessment will be straightforward — algorithms like RSA and ECC used for key exchange are well-known to be vulnerable to Shor’s algorithm. However, for some implementations, additional assessment approaches are required.
Cryptographic Health Check: Examine algorithms in use (flag those vulnerable to quantum attacks: RSA, ECC, and DH for Shor; note that AES is weakened by Grover, with AES-128 a greater concern and AES-256 likely remaining adequate). Assess key lengths. Evaluate protocol configurations for deprecated or insecure settings. Review key management practices, including rotation and storage.
Static and Dynamic Code Analysis: Use both approaches to identify cryptographic functions and assess their security. Static analysis examines code without execution; dynamic analysis evaluates runtime behavior. This combination provides a thorough understanding of cryptographic uses and potential weaknesses.
Configuration Audits: Perform detailed audits of cryptographic configurations across all systems and applications. Ensure settings adhere to current best practices and identify misconfigurations.
Cryptographic Penetration Testing: Simulate attacks to uncover weaknesses not evident through static analysis: test for known vulnerabilities, assess proprietary or custom implementations, and conduct dynamic testing under attack conditions.
Store and Integrate Results: Assessment results should be stored in the CBOM itself — each CBOM entry’s quantum vulnerability status, migration status, and migration feasibility fields should be updated as assessments complete. This integration enables the risk scoring phase to operate directly from CBOM data.
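For the straightforward cases, the health-check logic reduces to a lookup that can be run over CBOM records to populate their quantum-vulnerability status. A minimal sketch (the status labels and the choice to treat AES-256 as adequate are illustrative judgments, consistent with the guidance above):

```python
def quantum_status(algorithm: str, key_bits: int) -> str:
    """Illustrative mapping from algorithm family to quantum vulnerability."""
    alg = algorithm.upper()
    if alg in {"RSA", "ECC", "ECDSA", "ECDH", "DH", "DSA"}:
        return "shor-vulnerable"  # broken by Shor regardless of key size
    if alg == "AES":
        # Grover roughly halves effective strength: prefer AES-256.
        return "grover-weakened-adequate" if key_bits >= 256 else "grover-weakened"
    if alg in {"ML-KEM", "ML-DSA", "SLH-DSA"}:
        return "quantum-safe"
    return "unknown"  # custom/proprietary implementations need manual assessment

# Write assessment results back into CBOM-like records.
records = [
    {"name": "RSA-2048", "algorithm": "RSA", "key_bits": 2048},
    {"name": "AES-128-GCM", "algorithm": "AES", "key_bits": 128},
    {"name": "ML-KEM-768", "algorithm": "ML-KEM", "key_bits": 768},
]
for r in records:
    r["quantumVulnerabilityStatus"] = quantum_status(r["algorithm"], r["key_bits"])
    print(r["name"], "->", r["quantumVulnerabilityStatus"])
```

Everything that lands in "unknown" is where the code analysis, configuration audits, and penetration testing described above earn their keep.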
5.9. Perform Sensitive Data Discovery and Classification
This process involves identifying, categorizing, and understanding the sensitivity of data within the organization. When combined with the CBOM, it enables more effective and prioritized planning for PQC migration.
Different types of data carry varying levels of sensitivity. Personally identifiable information (PII), financial records, intellectual property, health records, and national security information are examples of highly sensitive data requiring stringent protection. Beyond sensitivity, the value of data to potential attackers and its required confidentiality lifetime should be assessed — data that must remain confidential for 10+ years is at higher HNDL risk than data with short-lived value.
Integrating sensitive data classification with the CBOM is essential for contextualizing cryptographic vulnerabilities. Without knowing the sensitivity of the data protected by a cryptographic implementation, it’s challenging to prioritize vulnerabilities effectively. A highly vulnerable cryptographic module protecting only public data may warrant less urgent attention than a moderately vulnerable one protecting trade secrets with 30-year confidentiality requirements.
5.9.1. Practical Steps for Sensitive Data Discovery and Classification
- Define Data Sensitivity Criteria: Establish criteria for different levels of sensitivity (e.g., Public, Internal, Confidential, Restricted). Include a confidentiality lifetime dimension — how long must this data remain confidential? Data with multi-decade requirements scores highest for HNDL urgency.
- Conduct Data Discovery: Use automated data discovery tools to scan databases, file systems, cloud storage, and endpoints for sensitive data. Complement with manual reviews for data that automated tools might miss.
- Classify and Tag Data: Tag and label data according to sensitivity and classification criteria. Ensure all data, structured and unstructured, is appropriately classified. Maintain detailed records including the criteria used and data owners.
- Link to CBOM: Ensure that data classification results are linked to corresponding CBOM entries. Each CBOM entry should indicate the data sensitivity classification of the data it protects.
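The confidentiality-lifetime dimension combines naturally with sensitivity into a simple HNDL urgency score. A minimal sketch (the weights and lifetime bands are illustrative and should be calibrated to your own risk appetite):

```python
SENSITIVITY_SCORE = {"Public": 0, "Internal": 1, "Confidential": 2, "Restricted": 3}

def hndl_urgency(sensitivity: str, lifetime_years: int) -> int:
    """Score 0-6: higher means a stronger harvest-now-decrypt-later target."""
    s = SENSITIVITY_SCORE[sensitivity]
    if lifetime_years >= 10:
        life = 3  # must outlive a plausible CRQC arrival window
    elif lifetime_years >= 3:
        life = 2
    elif lifetime_years >= 1:
        life = 1
    else:
        life = 0
    return s + life

print(hndl_urgency("Restricted", 30))  # 6: trade secrets, decades-long secrecy
print(hndl_urgency("Public", 30))      # 3: long-lived but already public
print(hndl_urgency("Internal", 0))     # 1: short-lived, low sensitivity
```

Note that a long lifetime raises urgency even for moderately sensitive data, which is the essence of the HNDL threat model.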
5.10. Perform Critical Systems and Assets Discovery and Classification
This process involves identifying, cataloging, and understanding the roles and criticality of all IT and OT assets. When combined with the CBOM, it enables prioritized migration planning.
Different assets play varying roles: servers hosting sensitive customer data are more critical than development servers used for testing. The classification should consider not just the asset’s current role but its position in the organization’s critical path — assets that many other systems depend on may warrant earlier migration even if they are not individually the most sensitive.
Combining systems and assets discovery with the CBOM helps contextualize where and how cryptographic methods are used within the infrastructure. This integration provides understanding of which cryptographic implementations are protecting the most critical systems and enables strategic planning for PQC migration.
5.10.1. Practical Steps
- Define Asset Classification Criteria: Develop criteria based on criticality, role, data handled, and number of dependent systems. Establish categories such as critical infrastructure, sensitive data processors, revenue-generating systems, and auxiliary systems.
- Conduct Asset Discovery: Use automated asset management tools to scan the network and identify all connected devices and software. Complement with manual checks. Cross-reference with existing CMDB, business impact assessment (BIA), and architecture documentation.
- Classify and Link: Tag assets according to classification criteria. Link to corresponding CBOM entries so that risk scoring can incorporate system criticality.
5.11. Keep Inventories Up to Date
Once comprehensive inventories are established, maintaining them is essential. Continuous updates ensure the organization remains aware of new vulnerabilities, changes in data sensitivity, and modifications in system configurations.
5.11.1. Practical Steps to Maintain Up-to-Date Inventories
- Continuous Monitoring: Implement tools configured to provide real-time updates and alerts for any changes to cryptographic implementations, data sensitivity, and system configurations.
- Regular Audits: Plan quarterly audits coordinated across all three inventory categories. Conduct audits after significant changes such as system upgrades, data migrations, or new software deployments.
- Integration with Change Management: Ensure that any changes to systems, data, or cryptographic implementations are documented as part of the change management process. Integrate inventory updates into the change management workflow — the CAB should review CBOM impact for every change.
- CI/CD Pipeline Integration: For organizations targeting continuous CBOM updates, integrate CBOM generation into the software delivery pipeline. New deployments should automatically generate or update CBOM entries. Block deployments that introduce quantum-vulnerable algorithms without documented justification and migration plan.
- Collaboration and Feedback: Facilitate regular meetings between IT, cybersecurity, compliance, and business units to discuss inventory updates. Establish feedback mechanisms to gather input from staff on accuracy and completeness.
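The CI/CD gate described in the steps above can be sketched as a pipeline check that fails when a deployment introduces a Shor-vulnerable algorithm without a documented waiver and migration plan (the field names are illustrative, matching the CBOM attributes discussed earlier):

```python
def gate(cbom_entries: list[dict]) -> list[str]:
    """Return violations that should block the deployment."""
    violations = []
    for e in cbom_entries:
        vulnerable = e.get("quantumVulnerabilityStatus") == "shor-vulnerable"
        waived = bool(e.get("waiver")) and bool(e.get("migrationPlan"))
        if vulnerable and not waived:
            violations.append(
                f"{e['name']}: quantum-vulnerable algorithm without "
                "waiver and migration plan"
            )
    return violations

entries = [
    {"name": "new-api-tls", "quantumVulnerabilityStatus": "shor-vulnerable"},
    {"name": "legacy-feed", "quantumVulnerabilityStatus": "shor-vulnerable",
     "waiver": "RISK-1234", "migrationPlan": "Q3 vendor upgrade"},
]
for v in gate(entries):
    print("BLOCK:", v)  # a real pipeline step would also exit non-zero here
```

Requiring both a waiver reference and a migration plan keeps the gate from becoming a rubber stamp: accepted risk is documented risk.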
5.12. Perform Risk Assessment and Prioritize for Remediation
With the CBOM populated, sensitive data classified, and systems and assets cataloged, conduct a comprehensive risk assessment — the Quantum Readiness Assessment (QRA). This process incorporates business context and dependencies, ensuring that quantum computing threats are evaluated within a broader organizational framework.
This is where the program transitions from “what do we have?” to “what do we do first?” — and where organizational politics often intrude. Every business unit believes its systems are either the most critical or the least affected. A structured, transparent scoring model depoliticizes this conversation by replacing opinion with defensible, repeatable criteria.
5.12.1. The Five-Dimension Risk Scoring Model
The Framework uses five core scoring dimensions:
Data Sensitivity: How sensitive is the data protected by this cryptographic implementation? Data with multi-decade confidentiality requirements (trade secrets, intelligence, long-term contracts) scores highest for HNDL risk. Include confidentiality lifetime as a sub-factor.
Cryptographic Vulnerability: How exposed is this specific algorithm and key size to quantum attack? RSA and ECC key exchange are Shor-vulnerable (highest score). AES-128 is weakened by Grover (moderate). AES-256 is weakened but likely remains adequate (lower). Already-PQC algorithms (lowest).
Exposure to Interception: Is this system internet-facing, in a semi-trusted zone, or fully isolated? Network-exposed TLS endpoints with vulnerable key exchange score higher than internal-only systems behind multiple network boundaries.
Migration Difficulty: Can this be migrated with a configuration change, or does it require a full system replacement? Self-controlled systems with modern crypto-agile architecture (low score). Vendor-dependent systems where the vendor has a published PQC roadmap (moderate). Vendor-dependent systems with no PQC roadmap, or legacy systems requiring complete replacement (highest score).
Regulatory and Compliance Urgency: Does a specific regulation or mandate set a deadline for this system? Systems in scope for CNSA 2.0, NCSC UK timelines, or PCI DSS 12.3.3 score higher.
For sector-specific environments, additional dimensions may apply (see Section 6).
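A transparent scoring model can be as simple as a weighted sum over the five dimensions with tier thresholds agreed in advance. A minimal sketch (the weights, 1-5 scales, and cut-offs are illustrative; the point is that they are written down and repeatable, and some programs invert the migration-difficulty dimension when ranking so that feasible migrations land earlier):

```python
# Each dimension scored 1-5 by assessors; weights sum to 1.0 (illustrative).
WEIGHTS = {
    "data_sensitivity": 0.25,
    "crypto_vulnerability": 0.25,
    "exposure": 0.20,
    "migration_difficulty": 0.15,
    "regulatory_urgency": 0.15,
}

def risk_score(scores: dict[str, int]) -> float:
    return round(sum(WEIGHTS[d] * scores[d] for d in WEIGHTS), 2)

def tier(score: float) -> str:
    if score >= 4.0:
        return "Tier-1"
    if score >= 3.0:
        return "Tier-2"
    if score >= 2.0:
        return "Tier-3"
    return "Tier-4"

# Example: internet-facing TLS endpoint protecting long-lived sensitive data.
internet_tls = {"data_sensitivity": 5, "crypto_vulnerability": 5,
                "exposure": 5, "migration_difficulty": 2, "regulatory_urgency": 4}
s = risk_score(internet_tls)
print(s, tier(s))  # 4.4 Tier-1
```

Because every input and weight is explicit, contested rankings become arguments about specific scores rather than about outcomes, which is what depoliticizes the conversation.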
5.12.2. Practical Steps for Risk Assessment and Prioritization
- Evaluate Cryptographic Vulnerabilities from CBOM Data: Use each CBOM entry’s algorithm, key size, and protocol context to assess quantum vulnerability.
- Integrate Data Sensitivity: Overlay data classification from Section 5.9 to determine the potential consequences if each system were compromised.
- Review System Criticality: Incorporate the asset classification from Section 5.10 to weight business impact.
- Map Network Exposure: Determine whether vulnerable systems are internet-facing, in DMZ, or fully internal.
- Assess Regulatory Urgency: Map each system to applicable regulatory timelines.
- Evaluate Existing Controls: Review current cybersecurity measures that may mitigate some identified risks and identify where enhancements are needed.
- Score and Rank: Apply the five-dimension scoring model to produce a prioritized migration backlog. Tier systems into priority groups: Tier-1 (migrate first — highest risk × most feasible), Tier-2 (migrate next), Tier-3 (migrate later), and Tier-4 (accept risk or defer).
- Validate with SteerCo: Present the QRA to the Steering Committee for review and approval. This is the point where contested priority assignments are resolved through the governance structure, not through stakeholder negotiation.
5.13. Develop Your Cryptographic Strategy and Multi-Year Roadmap
With the prioritized migration backlog in hand, develop a comprehensive cryptographic strategy and multi-year roadmap. This is a complex and multi-faceted undertaking that requires a thorough understanding of budget limitations, organizational risk appetite, the various remediation and risk reduction options, and critically, the external constraints that will shape the actual timeline.
A prioritized backlog tells you what needs to happen and in what order. The roadmap tells you when it will happen, who will do it, what it will cost, and what it depends on. The distinction matters because most PQC migration timelines are constrained not by internal execution capacity but by vendor readiness, hardware refresh cycles, regulatory deadlines, and the availability of scarce cryptographic engineering skills.
5.13.1. Understanding Risk Mitigation Options
Not every system can or should be migrated directly to PQC immediately. The cryptographic strategy should categorize systems into remediation buckets:
Direct PQC Upgrades: For systems where cryptographic modules are easily upgradeable — configuration changes to TLS cipher suites, library updates, certificate re-issuance. Plan and execute in coordinated waves.
Hybrid Cryptographic Deployment: Combining classical and PQC algorithms so that security depends on the strength of both. This is the responsible default for production deployments and is discussed in detail in Section 5.14.
Strengthening Surrounding Controls: Some systems can have their risk reduced by bolstering isolation mechanisms and improving surrounding cybersecurity measures. This includes network segmentation to limit HNDL interception scope, enhanced access controls, and improved monitoring. By improving the security posture of the environment around a vulnerable system, the overall risk is mitigated. This approach can be cost-effective and quicker than replacing cryptographic functions.
Tokenization: Replacing sensitive data with tokens that retain essential information without exposing actual data. This reduces the cryptographic attack surface — by tokenizing sensitive data, the scope of critical cryptographic systems is reduced. See “Evaluating Tokenization in the Context of Quantum Readiness.”
PQC-Aware Key Wrapping for Data at Rest: For archived data already encrypted with quantum-vulnerable key exchange, implement PQC-aware key-wrapping layers rather than re-encrypting entire data stores (which may be operationally infeasible for petabyte-scale archives). This protects the keys without requiring data re-encryption.
Vendor-Dependent Systems: For systems where migration depends on the vendor delivering PQC support, the strategy is engagement, leverage, and bridging patterns — discussed in Section 5.16.
Critical Legacy System Replacement: For systems where no other remediation is viable — often legacy systems with outdated hardware, absent vendor support, or deep process integration — the only option may be complete replacement. Given the scale and expense of such projects, planning must start as early as possible.
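The bucket assignment above can be expressed as a first-pass decision tree run over CBOM migration-feasibility data, with contested cases escalated to human review. A minimal sketch (the attribute names and branch order are illustrative; bucket labels follow the categories above):

```python
def remediation_bucket(system: dict) -> str:
    """Illustrative first-pass bucket assignment; edge cases go to review."""
    if system.get("already_pqc"):
        return "No action"
    if system.get("crypto_upgradeable"):  # config, library, or cert change
        return "Direct PQC upgrade (hybrid by default)"
    if system.get("vendor_dependent"):
        if system.get("vendor_pqc_roadmap"):
            return "Vendor engagement and tracking"
        return "Vendor leverage plus bridging controls"
    if system.get("legacy_no_path"):
        return "Critical legacy system replacement"
    return "Strengthen surrounding controls / tokenization / key wrapping"

examples = [
    {"name": "edge-lb", "crypto_upgradeable": True},
    {"name": "core-banking", "vendor_dependent": True, "vendor_pqc_roadmap": False},
    {"name": "plc-network", "legacy_no_path": True},
]
for s in examples:
    print(s["name"], "->", remediation_bucket(s))
```

The branch order encodes a preference: upgrade directly where you can, lean on vendors where you must, and reserve replacement programs for systems with no other path.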
5.13.2. Practical Steps for Strategy Development and Roadmap
- Assess Budget and Risk Appetite: Understand financial constraints and organizational risk tolerance. This guides prioritization and ensures the strategy is realistic.
- Categorize Systems into Remediation Buckets: Based on the risk assessment and migration feasibility data in each CBOM entry, assign each system to one of the remediation categories above.
- Evaluate Interdependencies: Carefully consider technical, business, and financial interdependencies. Changes in one area can impact another. Map the dependency chains — a cryptographic library used by 50 applications creates a single migration task with 50-fold impact.
- Align to Refresh Cycles: The most cost-effective migration piggybacks on already-funded infrastructure changes. Map the roadmap against hardware refresh, cloud migration, and vendor contract renewal calendars.
- Define the 90-Day Quick Start and Year-1 Plan: See Section 7 below. The first 90 days should establish governance, begin discovery, launch pilots, and initiate vendor engagement. The Year-1 plan should be specified quarter-by-quarter with named owners and success criteria.
- Build the Multi-Year Roadmap: Develop a 5-year plan with annual milestones and critical path identification. Include explicit contingency triggers and pre-drafted acceleration/deceleration plans for when vendor timelines slip or regulatory deadlines change.
- Define Milestone Gates: Establish formal gate reviews for each phase transition with defined criteria and decision authority.
- Manage as a Living Instrument: The roadmap is not a static Gantt chart. Revise quarterly as pilot results reveal unexpected complexity, vendor timelines shift, and regulatory landscapes evolve. Establish standing SteerCo review of roadmap progress as a monthly agenda item.
5.14. Execute Pilots and Begin Migration
Start with hybrid cryptography — it is not optional. Hybrid cryptographic deployment combines classical algorithms with PQC algorithms so that security depends on the strength of both. This provides defense in depth: if a PQC algorithm is found vulnerable, the classical algorithm still protects the data, and vice versa. Hybrid deployment is now supported by major TLS libraries and is the approach recommended by most national guidance documents.
For TLS key exchange, hybrid ML-KEM-768 combined with X25519 is the default starting point for most enterprises. For digital signatures, ML-DSA-65 (for general signatures) and SLH-DSA-128s (for scenarios requiring conservative, hash-based security at the cost of larger signatures) should be evaluated. Note that SLH-DSA is not part of CNSA 2.0 — ML-DSA-87 is the designated general CNSA 2.0 signature algorithm, with LMS/XMSS permitted for specific applications.
5.14.1. Designing Effective Pilots
Every pilot should be designed to produce evidence for production scaling, not just prove that the technology works in a lab. Select pilot targets carefully — the Framework recommends starting with two pilots: one TLS and one VPN.
Define success criteria in advance: What performance thresholds must be met? What compatibility requirements exist? What rollback triggers are defined?
Measure comprehensively: Performance impact (latency, throughput, handshake time before and after); compatibility findings (which middleboxes, clients, or devices fail?); operational complexity (how much did deployment and troubleshooting actually cost?); and issues that would block production scaling.
Document reusable migration patterns: Each successful pilot should produce a documented, repeatable pattern — a “playbook” for TLS hybrid, VPN hybrid, mTLS hybrid, code signing hybrid — that can be applied to similar systems without repeating the pilot.
Plan for rollback: Every pilot must have a tested rollback procedure. PQC migration at scale cannot afford to have any individual deployment become a production incident.
5.14.2. Defense-in-Depth Measures
PQC is necessary but not sufficient. While PQC migration progresses, deploy complementary defenses:
- AES-256 as the default symmetric cipher everywhere. Grover’s algorithm halves the effective strength of symmetric keys, making AES-128 the quantum equivalent of 64-bit security. AES-256 provides 128-bit quantum security — adequate for the foreseeable future.
- Aggressive network segmentation to limit HNDL interception scope. If adversaries cannot intercept the traffic, they cannot harvest it.
- Tokenization of sensitive data to reduce the cryptographic attack surface. See “Evaluating Tokenization in the Context of Quantum Readiness.”
- Shorter key lifetimes and automated rotation to reduce the window of vulnerability.
- Ephemeral key exchange (forward secrecy) wherever possible, ensuring that compromising one session key does not affect past or future sessions. TLS 1.3 makes ephemeral key exchange mandatory; ensure TLS 1.2 configurations use DHE or ECDHE cipher suites exclusively.
- PQC-aware key wrapping for archived data, protecting encryption keys without requiring full data re-encryption.
For more on these complementary measures, see “Mitigating Quantum Threats Beyond PQC.”
5.15. Tackle Infrastructure Modernization
PQC migration is not a drop-in algorithm swap. It requires infrastructure changes that many organizations are not prepared for. Pilot results from Section 5.14 are the primary input for infrastructure modernization scoping — each pilot reveals which middleboxes fail, which HSMs need PQC key support, which network paths cannot handle hybrid handshake sizes, and which PKI components require modernization.
Plan for larger cryptographic objects. ML-KEM public keys and ciphertexts are significantly larger than their classical equivalents. ML-DSA signatures are roughly 40 to 80 times larger than ECDSA signatures. These larger objects cause several infrastructure challenges:
- TLS handshake sizes increase dramatically. Hybrid TLS handshakes with ML-KEM + X25519 may require multiple TCP packets where classical handshakes fit in one. This triggers middlebox breakage (firewalls, intrusion detection systems, and load balancers that assume handshakes fit in a single packet), TCP fragmentation, and increased handshake latency.
- Certificate chain sizes grow substantially. PQC certificates with ML-DSA signatures are much larger than RSA or ECDSA certificates. Multi-level certificate chains compound this growth.
- HSM and KMS constraints. Many existing HSMs cannot handle PQC key sizes or algorithms without firmware updates or hardware replacement. FIPS 140-3 validated PQC modules are beginning to appear but are not yet universally available. HSM procurement lead times alone can add 6–12 months to the timeline.
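To make the size pressure concrete, here is a back-of-envelope comparison using the published parameter sizes from FIPS 203 (ML-KEM-768) and FIPS 204 (ML-DSA-65). The MTU figure and the two-level chain are illustrative assumptions, and the chain estimate deliberately ignores names, extensions, and encoding overhead:

```python
# Back-of-envelope sizing for hybrid TLS objects, using the public
# parameter sizes from FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA).
# All values in bytes.

X25519_PUBKEY = 32
ML_KEM_768_PUBKEY = 1184      # encapsulation key
ML_KEM_768_CIPHERTEXT = 1088
ECDSA_P256_SIG = 64           # raw r||s
ML_DSA_65_SIG = 3309
ML_DSA_65_PUBKEY = 1952

# Hybrid ClientHello key share: classical share + ML-KEM encapsulation key.
hybrid_client_share = X25519_PUBKEY + ML_KEM_768_PUBKEY

# A single TCP segment over a 1500-byte MTU carries roughly 1460 bytes of
# payload, so the key share alone nearly fills it before the rest of the
# ClientHello is counted.
print(f"hybrid client key share: {hybrid_client_share} B")

# Signature growth: ML-DSA-65 versus ECDSA P-256.
print(f"signature growth: {ML_DSA_65_SIG / ECDSA_P256_SIG:.0f}x")

# Rough size of a two-certificate ML-DSA chain (keys + signatures only).
chain = 2 * (ML_DSA_65_PUBKEY + ML_DSA_65_SIG)
print(f"2-cert ML-DSA chain, keys+sigs only: {chain} B")
```

The roughly 50x signature growth for ML-DSA-65 sits inside the 40–80x range cited above; the exact multiple depends on which ML-DSA parameter set is compared against which classical signature.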
Conduct performance testing in your actual operating environment — not vendor benchmarks. Test at production scale and under realistic load conditions. Identify bottlenecks and optimize configurations before broader rollout.
Plan infrastructure upgrades as roadmap deliverables. The infrastructure upgrade schedule — HSM replacement, PKI modernization, middlebox upgrades, network capacity expansion — should be explicit items in the multi-year roadmap with their own timelines, budgets, and dependencies.
5.16. Govern Your Vendors
For most organizations, vendor dependencies constrain the migration timeline more than any internal factor. The longest segment of most PQC migration critical paths is waiting for vendor products to ship PQC support. Organizations that start vendor engagement in Year 2 instead of Q1 Year 1 discover too late that their timeline is vendor-constrained and cannot be compressed with internal effort alone.
5.16.1. Vendor Classification and Assessment
Classify your vendors using a structured matrix:
Strategic Blocking vendors are those whose products are on the critical path for Tier-1 or Tier-2 system migration and whose PQC support timeline does not align with your needs. These require executive-level escalation and maximum contractual leverage.
Strategic Enabling vendors have PQC support planned or available and are needed for your migration. These require ongoing monitoring to ensure their timelines hold.
Non-Critical vendors are not on the critical path for near-term migration. These receive standard engagement and periodic assessment.
For each strategic vendor, determine: Do they have a published PQC roadmap? Have they committed to specific GA dates? Can they provide CBOM-compatible documentation? Is there a viable alternative if they fail to deliver? What is their FIPS 140-3 validation status and timeline?
5.16.2. Practical Steps for Vendor Governance
- Send PQC Roadmap Questionnaires in Q1 Year 1 to your top 10 strategic vendors. Do not wait for CBOM completion — the critical vendor list from your initial scoping assessment is sufficient to begin.
- Update Procurement Language Immediately: Add PQC and crypto-agility requirements to all new RFPs and contract renewals. Include clauses requiring vendors to deliver PQC support within defined timelines, provide cryptographic transparency, and commit to ongoing algorithm updates.
- Track Vendor Commitments Formally: Monthly tracking for strategic blocking vendors; quarterly for others. Report vendor status to SteerCo as a standing agenda item. Maintain a vendor PQC scorecard covering: percentage of strategic vendors with signed PQC commitments, number of products with GA hybrid/PQC capability, age of oldest unsupported critical product.
- Deploy Bridging Patterns for Vendor-Blocked Systems: When a vendor product doesn’t yet support PQC, deploy compensating measures: overlay encryption, network-level hybrid wrappers, gateway-based PQC termination. These reduce exposure while waiting for vendor support.
- Escalate Strategically: For strategic blocking vendors whose timelines don’t align, escalate through your executive sponsor to vendor C-suite leadership. Evaluate alternative vendors through proof-of-concept to create competitive leverage. Invoke contractual remedies where applicable.
- Don’t Forget Open-Source Dependencies: Open-source libraries require the same governance discipline as commercial vendor products. Track PQC support in your critical open-source cryptographic libraries (OpenSSL, BoringSSL, AWS-LC, liboqs) and plan for library upgrades.
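As a rough illustration of the scorecard metrics listed above, the following sketch computes them from a toy vendor list. The record structure and field names are invented for the example and are not part of the Framework:

```python
# Toy calculation of the vendor PQC scorecard metrics: commitment
# percentage, GA product count, and age of the oldest unsupported product.
from datetime import date

vendors = [
    {"name": "V1", "strategic": True,  "pqc_committed": True,
     "ga_pqc_products": 3, "oldest_unsupported_since": date(2024, 6, 1)},
    {"name": "V2", "strategic": True,  "pqc_committed": False,
     "ga_pqc_products": 0, "oldest_unsupported_since": date(2023, 1, 15)},
    {"name": "V3", "strategic": False, "pqc_committed": False,
     "ga_pqc_products": 1, "oldest_unsupported_since": None},
]

strategic = [v for v in vendors if v["strategic"]]
committed_pct = 100 * sum(v["pqc_committed"] for v in strategic) / len(strategic)
ga_products = sum(v["ga_pqc_products"] for v in vendors)
dates = [v["oldest_unsupported_since"] for v in vendors
         if v["oldest_unsupported_since"]]
oldest = min(dates)

print(f"strategic vendors with signed PQC commitments: {committed_pct:.0f}%")
print(f"products with GA hybrid/PQC capability: {ga_products}")
print(f"oldest unsupported critical product since: {oldest.isoformat()}")
```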
5.17. Build Crypto-Agility as a Permanent Capability
The ultimate destination of PQC migration is not “we swapped RSA for ML-KEM.” It is crypto-agility — the organizational and technical capability to change cryptographic algorithms routinely, without the kind of multi-year crisis program that current architectures require.
Marin’s Law on Crypto-Agility: “The effort required to change cryptography in a system is inversely proportional to the agility built into that system from the start.” Formally: Y ≈ K / A, where Y is the migration effort, K is the complexity of the cryptographic estate, and A is the agility level. Systems designed for agility make algorithm changes through configuration updates and rolling deployments. Systems without agility require code rewrites, application rebuilds, and architectural redesigns.
PQC algorithms will eventually need replacement too — cryptographic algorithms have limited lifespans. SHA-1, MD5, DES, 3DES, SSL, and early TLS all fell to attacks or obsolescence. Organizations that hardcoded these algorithms faced expensive emergency migrations. The goal of PQC migration is not merely to swap one set of algorithms for another, but to build the organizational capability to change cryptographic algorithms routinely. For more on this perspective, see “Rethinking Crypto-Agility.”
5.17.1. Crypto-Agility Architecture Principles
In practice, crypto-agility means:
- No direct algorithm calls in application code. Applications use cryptographic providers or adapters that abstract algorithm selection. Examples: Java Cryptography Architecture (JCA) with configurable providers, OpenSSL with pluggable engines, cloud KMS APIs that abstract underlying algorithms.
- Algorithm selection is configuration-driven. A policy file, environment variable, or central configuration service determines which algorithms are used. Changing from X25519 to ML-KEM-768 is a configuration change, not a code change.
- Dual-stack capability during transition. Systems can negotiate both classical and PQC algorithms simultaneously (hybrid mode) and gracefully degrade if the counterparty doesn’t support PQC.
- Automated certificate and key lifecycle management. Certificate issuance, renewal, rotation, and revocation are fully automated. Manual certificate management at scale is incompatible with the agility requirement.
- Algorithm changes are tested in CI/CD. Cryptographic configuration changes follow the same deployment pipeline as code changes: tested in staging, canary-deployed, monitored, and rollback-capable.
- Continuous monitoring of cryptographic posture. Monitoring detects algorithm drift, expired configurations, and non-compliant deployments. Alerts fire when a system negotiates a deprecated algorithm.
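A minimal sketch of the first two principles — no direct algorithm calls, configuration-driven selection — using stand-in key-exchange functions. The registry, policy structure, and function names are all illustrative, not a real provider API:

```python
# Configuration-driven algorithm selection: applications never name an
# algorithm directly; a policy (loaded from a file, env var, or config
# service in practice) chooses from a registry of providers.
import secrets

def x25519_style_kex() -> bytes:
    """Stand-in for a classical key exchange; returns a 32-byte secret."""
    return secrets.token_bytes(32)

def ml_kem_768_style_kex() -> bytes:
    """Stand-in for an ML-KEM-768 encapsulation; returns a 32-byte secret."""
    return secrets.token_bytes(32)

KEX_REGISTRY = {
    "x25519": x25519_style_kex,
    "ml-kem-768": ml_kem_768_style_kex,
}

# Flipping this one configuration value is the whole algorithm change.
POLICY = {"kex": "ml-kem-768"}

def negotiate_key() -> bytes:
    algo = POLICY["kex"]
    if algo not in KEX_REGISTRY:
        raise ValueError(f"algorithm {algo!r} not available")
    return KEX_REGISTRY[algo]()

shared = negotiate_key()
print(f"negotiated via {POLICY['kex']}, got {len(shared)}-byte secret")
```

The point of the pattern is that migrating from X25519 to ML-KEM-768 touches the policy, not the application code — which is exactly the property that turns the next algorithm transition into a configuration change.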
5.18. Hybrid and Interim Strategies for Protecting Data Against Quantum Threats
While the NIST PQC standards are now finalized and hybrid deployment is the recommended production approach (Section 5.14), some organizations may not be ready to deploy PQC algorithms across all systems immediately. The migration will take years, and during this transition period, there are complementary strategies available to protect data against future quantum threats. These approaches should be considered as part of your comprehensive post-quantum strategy — not as alternatives to PQC migration, but as additional layers of defense during the transition.
5.18.1. Hybrid Cryptographic Schemes
Hybrid cryptographic schemes combine classical and quantum-resistant algorithms to provide defense in depth. By using both types of algorithms together, organizations ensure that even if one is compromised, the other remains secure. For instance, a hybrid scheme combines X25519 (classical) with ML-KEM-768 (quantum-resistant) for key exchange, so that the shared secret derives from both algorithms. Even if the PQC algorithm were found to have a vulnerability, the classical algorithm still protects the session — and even if a CRQC breaks the classical algorithm, the PQC algorithm provides quantum-resistant protection.
In practical terms, implementing a hybrid scheme involves setting up a key exchange protocol that derives a shared secret from both the classical and quantum-resistant algorithms. This shared secret is then used for subsequent symmetric encryption, ensuring that the data remains secure even if one of the key exchange methods is broken.
Hybrid cryptographic schemes are particularly suitable for: securing long-term data such as classified government information or long-term financial records; high-value transactions where the cost of compromise is extreme; and providing security during the extended transition period toward full PQC adoption.
While hybrid schemes offer enhanced security, they come with several challenges that must be planned for:
Increased Complexity: Combining classical and quantum-resistant algorithms adds complexity to cryptographic protocols, which can increase the risk of implementation errors. Rigorous testing and validation are essential.
Performance Overhead: The use of multiple cryptographic algorithms leads to increased computational overhead, impacting handshake times and throughput. The extent of this impact depends on the specific algorithms and the operating environment — performance testing in your actual infrastructure is critical (see Section 5.15).
Compatibility Issues: Ensuring compatibility between classical and PQC algorithms can be challenging, particularly when integrating with existing systems, middleboxes, and protocols that do not yet understand hybrid negotiation. Middlebox breakage — where firewalls, intrusion detection systems, or load balancers drop oversized hybrid handshakes — is one of the most commonly encountered issues in pilot deployments.
Key Management: Managing multiple cryptographic keys for different algorithms can complicate key management processes. Ensure your KMS and HSM infrastructure can handle the additional key material.
5.18.1.1. Practical Steps to Implement Hybrid Cryptographic Schemes
- Choose Appropriate Algorithms: Select both classical and quantum-resistant algorithms suited for your use case. For TLS key exchange, the default starting point is X25519 (classical) combined with ML-KEM-768 (quantum-resistant). For digital signatures, evaluate ML-DSA-65 paired with ECDSA-P256 or Ed25519, noting that hybrid signatures significantly increase certificate sizes.
- Design the Key Exchange Protocol: Develop a key exchange protocol incorporating both classical and quantum-resistant methods. In TLS, this is typically handled by the library — modern implementations of OpenSSL, BoringSSL, and AWS-LC support hybrid key exchange. The protocol generates two separate key shares (one classical, one quantum-resistant) and combines them to derive the session secret.
- Combine Keys Securely: Use a key derivation function (KDF) to combine the keys obtained from both key exchanges. The combined key benefits from the security properties of both inputs. In TLS 1.3, the hybrid key exchange is integrated into the standard key schedule — the implementation details are handled by the TLS library.
- Integrate into Existing Systems: Ensure the hybrid scheme can be integrated into your existing infrastructure. This may involve updating TLS libraries, modifying load balancer and reverse proxy configurations, and validating that middleboxes can handle the larger handshake messages.
- Performance Optimization: Optimize the performance of hybrid cryptographic operations. This could involve selecting more efficient parameter sets, enabling hardware acceleration where available, or adjusting TLS session resumption settings to reduce the frequency of full handshakes.
- Testing and Validation: Rigorously test the hybrid scheme in a production-mirror environment to ensure it meets security requirements and performs acceptably. Include both functional testing (correctness) and performance testing (latency, throughput, resource consumption under load). Test with real client populations and real middlebox configurations — not just lab conditions.
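The “combine keys securely” step can be sketched with a hand-rolled HKDF (RFC 5869). The two input secrets are random stand-ins for real X25519 and ML-KEM-768 outputs, and the salt and info labels are illustrative; production code should rely on the TLS library’s built-in key schedule rather than anything like this:

```python
# Derive one session secret from two independent key-exchange outputs
# with HKDF (RFC 5869): extract, then expand.
import hashlib
import hmac
import secrets

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = secrets.token_bytes(32)   # e.g., X25519 output
pqc_secret = secrets.token_bytes(32)         # e.g., ML-KEM-768 output

# Concatenating both secrets as the input keying material means the
# derived key stays secure as long as either input remains unbroken.
prk = hkdf_extract(salt=b"hybrid-handshake",
                   ikm=classical_secret + pqc_secret)
session_key = hkdf_expand(prk, info=b"session key", length=32)
print(len(session_key))
```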
5.18.2. Retained Shared Secrets
Another strategy supplements the key material obtained from public key operations with retained shared secret data in the key derivation process. Retained shared secret approaches, which draw on concepts from protocols like ZRTP (Zimmermann Real-time Transport Protocol), rely on pre-shared pieces of information known only to the communicating parties. Unlike keys derived from public key infrastructure (PKI), which are vulnerable to quantum attacks, retained shared secrets are never exchanged over potentially insecure channels and therefore remain secure even against quantum adversaries. Combining these secrets with the key material obtained through traditional public key operations strengthens the security of the resulting cryptographic keys.
The key derivation process involves generating cryptographic keys from a combination of inputs. In this strategy, the derived key is constructed using both the retained shared secret and the key material from a public key operation (e.g., RSA or ECC). This dual-input approach ensures that even if the public key operation is compromised by a quantum computer, the retained shared secret provides an additional layer of security, making it significantly more difficult for an attacker to derive the full cryptographic key.
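A minimal sketch of this dual-input derivation, with random stand-ins for the retained secret and the (EC)DH output. The labels and the ZRTP-style rolling update are illustrative, not a specification:

```python
# Dual-input key derivation: a retained shared secret (known only to the
# two peers) keys the derivation, so recovering the public-key-derived
# material alone does not yield the session key.
import hashlib
import hmac
import secrets

retained_secret = secrets.token_bytes(32)  # provisioned out of band
dh_output = secrets.token_bytes(32)        # fresh (EC)DH result (quantum-vulnerable)

# An attacker who breaks the DH exchange still lacks retained_secret.
session_key = hmac.new(retained_secret, b"session" + dh_output,
                       hashlib.sha256).digest()

# ZRTP-style key continuity: roll the retained secret forward so each
# session strengthens the next one.
retained_secret = hmac.new(retained_secret, b"retain" + session_key,
                           hashlib.sha256).digest()
print(len(session_key), len(retained_secret))
```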
ZRTP is an example of a retained shared secret approach. Designed by Phil Zimmermann, it is a cryptographic key-agreement protocol used mainly in VoIP (Voice over IP) communications. It establishes a secure channel by combining Diffie-Hellman key exchange with retained shared secrets, operates independently of the underlying transport layer, and authenticates the exchange through hash commitments and short authentication strings rather than certificates. Key steps include:
- Initial Key Exchange: During the initial setup, ZRTP performs a Diffie-Hellman key exchange to establish a session key.
- Hash Comparison: The parties compare short authentication strings (SAS) derived from the session key to verify the integrity of the exchange.
- Retention of Shared Secret: The protocol retains shared secrets between sessions, which are used to strengthen the key exchange in subsequent communications.
Security benefits of this approach include forward secrecy (session keys are ephemeral and not stored long-term), resistance to man-in-the-middle attacks (SAS comparison detects tampering), and enhanced security with retained secrets (subsequent sessions become more resilient to potential attacks, including from quantum computers).
5.18.2.1. Retained Shared Secrets: Viable Complementary Measure?
The approach provides immediate enhancement of security without requiring a complete overhaul of the existing cryptographic infrastructure. Organizations can integrate shared secrets into their current key derivation processes, providing an additional boost alongside their PQC migration program.
However, implementing this strategy comes with specific practical considerations. The major constraint is the need to maintain and securely store pairwise shared secret data. Each pair of communicating entities must have a unique shared secret retained over time. This imposes a stateful architecture, where systems must remember and manage previous interactions. Consequently, this approach is best suited for environments with a limited number of peers, such as a closed network of trusted devices or partners, rather than open, large-scale systems with numerous and dynamic connections.
In real-world applications, this strategy is particularly useful for systems that already maintain state and have relatively stable, limited sets of peers. Internal communications between critical systems can employ retained shared secrets to bolster security. Similarly, secure channels between long-term business partners or between a company and its remote offices can benefit from this approach.
5.18.2.2. Practical Steps to Implement Retained Shared Secrets
- Establish Secure Initial Communication: Use a secure initial communication channel to exchange the shared secrets — in person, over a secure phone call, or using a pre-existing secure channel. Generate strong shared secrets using a high-entropy random number generator (or a QRNG) to ensure they are difficult to guess or brute-force.
- Store Shared Secrets Securely: Store the shared secrets in a secure, encrypted form using a hardware security module (HSM), secure enclave, or well-protected software solution. Ensure that only authorized entities have access and implement strict access controls and audit logging.
- Integrate Shared Secrets into Key Derivation: Use a robust key derivation function (e.g., HKDF — HMAC-based Extract-and-Expand Key Derivation Function) that can combine the shared secret with the key material from public key operations. Ensure the KDF is resistant to known cryptographic attacks and follows best practices.
- Implement Stateful Systems: Design your systems to maintain state, ensuring shared secrets are persistently stored and managed. Implement mechanisms to manage the lifecycle of shared secrets, including rotation, expiration, and revocation procedures.
- Limit the Set of Peers: Restrict the use of retained shared secrets to a limited set of trusted peers to reduce complexity and enhance security. Establish policies for managing the peer group, including adding and removing peers securely.
- Regularly Rotate Shared Secrets: Implement regular rotation to minimize the risk of long-term exposure. Automate the rotation process where possible. Ensure new secrets are securely distributed and stored without service interruption.
- Monitor and Audit: Continuously monitor the usage of shared secrets and key derivation processes to detect anomalies or unauthorized access. Conduct regular audits to ensure compliance with security policies.
- Educate and Train Staff: Provide training on secure key management and the specific procedures for handling retained shared secrets. Ensure all relevant personnel understand the security implications and operational requirements.
5.18.3. Ephemeral Key Exchange
Ephemeral key exchange refers to a cryptographic process where temporary keys are generated for each session or transaction. These keys are used only for the duration of the session and then discarded, ensuring that even if long-term keys are later compromised, they cannot be used to decrypt past communications. This property is known as forward secrecy: compromising a long-term key does not expose the session keys of past sessions.
In protocols like TLS, ephemeral Diffie-Hellman (DHE) or Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) key exchanges provide this forward secrecy. TLS 1.3 makes ephemeral key exchange mandatory — all TLS 1.3 handshakes use ephemeral key exchange by design. However, many organizations still have TLS 1.2 configurations that use static key exchange (RSA key transport), which provides no forward secrecy and is particularly vulnerable to HNDL attacks: an adversary who captures the encrypted traffic and later obtains the server’s private key (whether through quantum computing or other means) can decrypt all historical sessions.
While ephemeral key exchange does not provide quantum resistance — the underlying Diffie-Hellman or ECDH algorithms remain vulnerable to Shor’s algorithm — it significantly reduces the HNDL exposure window compared to static key exchange. With ephemeral key exchange, an attacker would need the CRQC capability at the time of session establishment, not retrospectively. This makes ephemeral key exchange a valuable complementary measure while PQC migration progresses.
5.18.3.1. Practical Steps to Implement Ephemeral Key Exchange
- Audit Current TLS Configurations: Identify all TLS endpoints still using static RSA key exchange (cipher suites like TLS_RSA_WITH_AES_128_GCM_SHA256). These are the highest-priority targets for upgrade because they offer no forward secrecy and are most vulnerable to HNDL.
- Configure TLS for Ephemeral Key Exchange: Ensure TLS configurations use cipher suites that support DHE or ECDHE. For TLS 1.2, prioritize cipher suites like TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 and TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384. For TLS 1.3, ephemeral key exchange is mandatory — focus on ensuring all endpoints support TLS 1.3 where possible.
- Configure SSH for Ephemeral Key Exchange: SSH key exchange is ephemeral by design, so the practical hardening task is removing weak legacy methods. Configure SSH to prefer curve25519-sha256 or diffie-hellman-group-exchange-sha256, and remove outdated options such as diffie-hellman-group1-sha1 from both server and client configurations.
- Configure IPsec/VPN for Ephemeral Key Exchange: Configure IKE settings to use ephemeral Diffie-Hellman groups. Ensure that IPsec policies include modern DH groups (modp2048, modp3072, or curve25519) and do not fall back to static key exchange.
- Discard Ephemeral Keys After Use: Ensure that no ephemeral key material is retained in memory or storage after session completion. Verify this through security audits and penetration testing.
- Key Management Practices: Regularly update key management policies to ensure ephemeral keys are handled securely. Conduct periodic audits to verify compliance.
- Monitor and Maintain: Monitor the implementation of ephemeral key exchanges to detect anomalies. Keep cryptographic libraries and protocols updated with the latest security patches.
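For the TLS step above, a small sketch using Python’s ssl module shows one way to restrict a TLS 1.2 server context to forward-secret ECDHE suites. TLS 1.3 suites are ephemeral by design and unaffected by set_ciphers(); the exact suite list depends on the underlying OpenSSL build:

```python
# Restrict a server-side TLS context to forward-secret cipher suites:
# drop static-RSA key transport, keep only ECDHE AEAD suites for TLS 1.2.
import ssl

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")

for cipher in ctx.get_ciphers():
    name = cipher["name"]
    # Remaining TLS 1.2 suites are ECDHE; TLS 1.3 names start with "TLS_".
    assert name.startswith(("ECDHE", "TLS_")), name
print(f"{len(ctx.get_ciphers())} forward-secret suites enabled")
```

Auditing the counterpart — finding endpoints that still negotiate TLS_RSA_* suites — is typically done with a scanner against live endpoints rather than from configuration alone.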
5.18.4. Split Key Encryption
Split key encryption, also known as secret sharing, involves dividing a cryptographic key into multiple parts and distributing these parts among different parties. Each part is useless on its own and requires a certain number of parts to be recombined to reconstruct the original key. This approach ensures that no single entity holds the complete key, making it significantly harder for an attacker — including one with quantum computing capability — to compromise the encryption.
An example is Shamir’s Secret Sharing scheme, where a key is split into several pieces, and only a subset (a “threshold”) of these pieces is needed to reconstruct the key. This method can be used in conjunction with other cryptographic techniques to enhance security and protect against quantum threats by distributing the risk: even if a quantum computer could break one component of the key management system, the split key architecture means no single compromise provides access to the full key.
Like the other approaches described here, split key encryption has its challenges. The process of splitting and reconstructing keys adds complexity to the cryptographic system. Securely managing and distributing the key shares requires robust key management practices. And the additional computational steps involved in splitting and reconstructing keys can impact performance.
5.18.4.1. Practical Steps to Implement Split Key Encryption
- Choose a Secret Sharing Scheme: Shamir’s Secret Sharing is the most widely used scheme. It splits the secret into n parts such that any k parts (the threshold) can reconstruct the secret. Blakley’s Scheme is another option that uses geometric intersections to share secrets.
- Define Parameters: Set the threshold k (minimum number of shares required to reconstruct the secret) and the total shares n (total number of shares to generate). Choose parameters based on your security requirements and operational constraints — for example, a 3-of-5 scheme provides redundancy (any 3 of 5 keyholders can reconstruct the key) while limiting the impact of individual share compromise.
- Generate Key Shares: Using Shamir’s Secret Sharing: choose a prime number p larger than the secret, construct a polynomial of degree k-1 where the constant term is the secret, and evaluate the polynomial at n different non-zero points to generate the shares.
- Distribute Shares Securely: Use secure channels (encrypted transfer, physically transported USB drives in secure pouches) to distribute shares to different parties. For highly sensitive keys, distribute shares across geographically separated secure locations.
- Store Shares Securely: Encrypt each share before storage to provide an additional layer of security. Implement strict access controls to ensure only authorized entities can access the shares. Consider storing shares in HSMs or secure enclaves.
- Reconstruct the Key When Needed: To reconstruct the key, gather the required k shares. Apply Lagrange interpolation to reconstruct the polynomial and retrieve the secret. Implement this reconstruction in a secure execution environment.
- Regular Rotation and Management: Periodically regenerate and redistribute key shares to minimize the risk of long-term exposure. Regularly audit the management and access of key shares to ensure compliance with security policies.
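The split and reconstruct steps above can be sketched in a few lines over a prime field. This is a demonstration of the math, not a production implementation — real deployments should use an audited secret-sharing library and operate on byte-string secrets:

```python
# Minimal Shamir split/reconstruct: random polynomial of degree k-1 with
# f(0) = secret, shares at non-zero points, Lagrange interpolation at x=0.
import secrets

P = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def split(secret: int, k: int, n: int) -> list:
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation evaluated at x = 0.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = secrets.randbelow(P)
shares = split(secret, k=3, n=5)          # the 3-of-5 scheme from above
assert reconstruct(shares[:3]) == secret  # any 3 shares recover the key
assert reconstruct(shares[2:]) == secret  # a different 3 also work
```

Note that any two shares reveal nothing about the secret: with fewer than k points, every candidate secret remains equally consistent with the observed shares.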
It is important to note that these interim and complementary strategies are not substitutes for full PQC migration. They are additional layers of defense that reduce quantum risk exposure during the multi-year transition period. The ultimate goal remains migrating to NIST-standardized PQC algorithms and building crypto-agility as a permanent organizational capability.
6. Sector-Specific Considerations
The core methodology above applies universally, but specific sectors face additional constraints that shape how the steps are executed. The Applied Quantum PQC Migration Framework includes dedicated sector extension documents for each of these verticals, providing detailed phase-by-phase adaptations.
6.1. Financial Services
Financial services face the densest regulatory overlay of any sector. PCI DSS v4.0 Requirement 12.3.3 already mandates documentation and annual review of cryptographic cipher suites and protocols in use. DORA (Digital Operational Resilience Act) imposes supply chain security requirements that directly intersect with PQC vendor governance. National regulators — OCC and Federal Reserve in the US, ECB in Europe, MAS in Singapore, HKMA in Hong Kong — are all developing or have published quantum readiness expectations.
Payment processing systems present some of the most intricate CBOM challenges. The cryptographic stack in modern interbank payment systems spans HSM-protected PIN blocks, session keys for inter-institution communication, message authentication codes for settlement messages, and TLS connections between processing nodes. The cryptographic iceberg inside a mobile banking transaction is even more complex, with cryptographic operations spanning the mobile device, the bank’s API gateway, multiple backend services, card network interfaces, and inter-bank settlement systems.
The TNFL threat deserves particular attention in financial services. Digital signatures on transactions, contracts, and regulatory filings could be forged or repudiated once CRQCs are available. Non-repudiation — the assurance that a party cannot deny having signed a document — is a foundational requirement for financial markets. The integrity-side quantum threat to non-repudiation may prove even more disruptive than the confidentiality-side threat to encrypted data.
Financial institutions should add a “regulatory density” dimension to the risk scoring model, reflecting the number and stringency of overlapping regulatory requirements applicable to each system. Systems in scope for multiple regulations (PCI DSS + DORA + national banking regulation) should score higher for migration urgency.
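As a sketch of how the extra dimension might work, the following toy scoring function counts overlapping regulations per system. The weights, field names, and example systems are assumptions for illustration, not part of the Framework:

```python
# Illustrative risk scoring with a "regulatory density" dimension:
# each overlapping regulation adds weight to migration urgency.
def migration_urgency(system: dict) -> int:
    score = system["data_sensitivity"] + system["exposure"]
    score += 2 * len(system["regulations"])  # regulatory density
    return score

core_banking = {"data_sensitivity": 5, "exposure": 3,
                "regulations": ["PCI DSS", "DORA", "national banking"]}
marketing_site = {"data_sensitivity": 1, "exposure": 4, "regulations": []}

# The triple-regulated system outranks the internet-exposed but
# unregulated one, as the guidance above intends.
assert migration_urgency(core_banking) > migration_urgency(marketing_site)
print(migration_urgency(core_banking), migration_urgency(marketing_site))
```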
6.2. Telecommunications
Telecommunications operators face unique challenges that make PQC migration simultaneously critical and constrained. 3GPP and O-RAN Alliance specification dependencies determine which cryptographic changes the operator can make unilaterally and which require specification changes through the standards body — a process that can take years.
The cryptographic stack in a modern 5G call involves multiple protocol layers (radio, transport, signaling, application), multiple trust domains (operator, vendor, partner operator, standards body), and massive scale. A single mobile operator may have millions of SIM/eSIM credentials, thousands of base stations with IPsec tunnels, hundreds of core network elements with mutual TLS, and dozens of roaming and interconnect interfaces with partner operators.
Roaming and interconnect interfaces present a particular challenge: migrating these interfaces to PQC requires bilateral or multilateral coordination with other operators. An operator cannot unilaterally change the cryptography on a roaming interface — both sides must agree and coordinate the change. This multi-operator coordination requirement significantly extends migration timelines for these interfaces.
The concentration of infrastructure vendors (Ericsson, Nokia, Huawei) means that vendor governance is particularly critical for telecoms. A small number of vendors control cryptographic implementations across the network core, and their PQC roadmaps constrain the operator’s migration timeline.
Telecommunications organizations should add “standards dependency” (does migration require 3GPP specification changes?) and “multi-operator coordination” (does migration require bilateral agreement?) to their risk scoring model. Systems that are both standards-dependent and multi-operator-coordinated — such as roaming security — score highest for migration difficulty.
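A minimal sketch of how those two flags might combine into a difficulty score (the field names, example systems, and scoring are illustrative assumptions):

```python
# Illustrative sketch: flagging telecom systems whose PQC migration is
# standards-dependent and/or multi-operator-coordinated. Example
# systems and scores are assumptions, not 3GPP or Framework terminology.
def migration_difficulty(standards_dependent: bool,
                         multi_operator: bool) -> int:
    """Score 0-2: each external dependency adds a point; systems with
    both (e.g., roaming security) score highest."""
    return int(standards_dependent) + int(multi_operator)

systems = {
    "roaming security":   migration_difficulty(True, True),
    "internal core mTLS": migration_difficulty(False, False),
    "RAN IPsec backhaul": migration_difficulty(True, False),
}
hardest = max(systems, key=systems.get)
```

Systems scoring 0 can be migrated on the operator's own schedule; systems scoring 2 need early engagement with standards bodies and partner operators precisely because their timelines are the longest.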
6.3. Critical Infrastructure and OT
Critical infrastructure operators with significant OT environments face a fundamentally different set of constraints that make PQC migration slower, riskier, and more dependent on compensating controls than in IT-centric enterprises. The defining characteristic of OT environments is that cryptographic failure can cause physical harm.
OT systems (SCADA, PLCs, RTUs) often have 15–25 year lifecycles with no update mechanism. Safety constraints require any cryptographic change to go through safety case re-certification (IEC 61508, IEC 61511, ISA 84). Active scanning of OT networks can cause operational disruptions — discovery approaches that are routine in IT environments may be dangerous in OT.
The TNFL threat is particularly urgent for OT: a forged firmware signature on a safety instrumented system (SIS) could disable safety interlocks. A compromised VPN key on a SCADA link could enable unauthorized control commands to a dam, gas pipeline, or power grid substation. The board message for critical infrastructure organizations should lead with TNFL, not HNDL: “The signatures protecting our safety systems will be forgeable by quantum computers. The devices most at risk are the ones we cannot easily update.”
Gateway-based PQC termination at the OT/IT boundary is typically the pragmatic starting point — deploying PQC-capable gateways that terminate PQC connections from the IT side and translate to legacy cryptography for OT devices. Long-term device refresh plans address endpoints that cannot be upgraded.
OT engineering and process safety must have co-equal status with IT security on the SteerCo. Include the Chief Engineer or Director of Operations, not just the CISO.
6.4. Government and Defense
Government and defense is the one sector where PQC migration is not optional or aspirational — it is mandated with hard deadlines. CNSA 2.0 requires that new national security system acquisitions be compliant by 2027; software and firmware signing must use PQC by 2030; networking equipment (VPNs, routers) must use CNSA 2.0 exclusively by 2030; web, cloud, and OS platforms by 2033; and all NSS, including custom and legacy systems, must be fully migrated by 2035.
In the United States, NSM-10 directed federal agencies to inventory cryptographic systems and develop migration plans. OMB M-23-02 required agencies to submit inventories of systems vulnerable to quantum computers. Similar mandates exist in the UK, EU, Australia, Canada, and other Five Eyes and NATO nations.
The challenge for government and defense is not securing the mandate — it exists by policy. The challenge is translating policy mandates into funded, resourced, staffed programs with realistic timelines, and managing the additional complexity of working across classified and unclassified domains. CBOM data for classified systems is itself sensitive — potentially classified — because it reveals cryptographic architecture. FIPS 140-3 validation requirements constrain which PQC implementations can be deployed in federal systems.
Note that SLH-DSA is not part of CNSA 2.0 — ML-DSA-87 is the designated general CNSA 2.0 signature algorithm, with LMS/XMSS permitted for specific applications such as firmware signing. Organizations targeting CNSA 2.0 compliance should plan their algorithm selection accordingly.
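As a sketch, a compliance tool might encode those selections in a simple policy table; the structure and helper below are illustrative, and current requirements should always be confirmed against NSA's published CNSA 2.0 guidance:

```python
# Illustrative CNSA 2.0 algorithm-selection table based on the
# selections discussed above; the structure and helper are assumptions,
# confirm against NSA's published CNSA 2.0 guidance before use.
CNSA2_ALGORITHMS = {
    "key_establishment": "ML-KEM-1024",
    "general_signatures": "ML-DSA-87",
    "firmware_signing": ("LMS", "XMSS"),  # stateful hash-based, permitted here
}

def cnsa2_signature_algorithm(use_case: str) -> str:
    """Pick a CNSA 2.0 signature algorithm for a use case. SLH-DSA is
    deliberately absent: it is not part of CNSA 2.0."""
    if use_case == "firmware_signing":
        return CNSA2_ALGORITHMS["firmware_signing"][0]
    return CNSA2_ALGORITHMS["general_signatures"]
```

Encoding the policy as data rather than scattering algorithm names through code is itself a crypto-agility practice: when guidance changes, the table changes in one place.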
7. The First 90 Days
For cybersecurity leaders reading this who want to know precisely where to begin, here is a concrete 90-day quick start based on the Framework’s Activity 4.1:
Week 1–2: Develop a board-ready briefing that frames PQC migration around the four urgency drivers. Identify your executive sponsor. Secure agreement to fund a scoping assessment.
Week 3–4: Conduct the initial scoping assessment: identify your top 20 critical systems, estimate the size of your cryptographic estate (number of TLS endpoints, certificates, VPN tunnels, HSM-protected keys, code-signing pipelines), and identify your top 10 vendor dependencies.
Week 5–8: Use the scoping assessment to build the full business case and budget request. Draft the program charter. Design the governance structure (SteerCo composition, QRPM role, workstream model). Begin assembling the cross-functional team. Enroll a 10–20 person training cohort in PQC fundamentals.
Week 9–10: Present the business case and secure multi-year budget commitment. Appoint the QRPM. Convene the first SteerCo meeting. Publish the charter and RACI. Designate “crypto champions” per platform team.
Week 11–12: Deploy automated discovery tools on Priority A systems (internet-facing, Tier-1). Begin CBOM structure design. Send PQC roadmap questionnaires to your top 10 strategic vendors. Start cryptographic policy revision (approved cipher suites include PQC hybrids; shorter key lifetimes for new certificates; PQC and crypto-agility clauses added to all new RFPs).
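One way to start the CBOM structure design is to model entries loosely on CycloneDX, which added a cryptographic-asset component type in version 1.6. The fields below are a simplified, illustrative subset rather than the authoritative schema:

```python
# Simplified, illustrative CBOM entry loosely modeled on CycloneDX's
# cryptographic-asset component type; these field names are a subset
# chosen for the sketch, not the authoritative schema.
import json

def cbom_entry(name: str, asset_type: str, algorithm: str,
               quantum_vulnerable: bool, owner: str) -> dict:
    return {
        "type": "cryptographic-asset",
        "name": name,
        "cryptoProperties": {
            "assetType": asset_type,      # e.g., "certificate", "protocol"
            "algorithm": algorithm,       # e.g., "RSA-2048", "ML-KEM-768"
            "quantumVulnerable": quantum_vulnerable,
        },
        "owner": owner,                   # platform team accountable for it
    }

cbom = [
    cbom_entry("api-gw TLS cert", "certificate", "RSA-2048", True, "platform-edge"),
    cbom_entry("vpn-tunnel-eu1", "protocol", "ML-KEM-768 hybrid", False, "netops"),
]
print(json.dumps(cbom, indent=2))
```

Whatever schema you settle on, the two non-negotiable fields are quantum vulnerability and an accountable owner — without them the CBOM cannot drive prioritization.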
Week 13 (Day 90): Deliver the first SteerCo progress report with initial discovery findings. Present KPI baselines and Q+1 targets. Confirm two hybrid pilot targets (TLS + VPN), set up lab/staging environments, and define pilot success criteria. Publish the Year 1 quarterly plan and board reporting template.
By the end of Day 90 you should have: governance fully operational; training underway; Crypto-BOM v1 in progress with ≥70% Tier-1 coverage; two hybrid pilots selected; vendor questionnaires sent; policy updated; and KPI baselines set.
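The Tier-1 coverage KPI lends itself to a simple calculation; this sketch (with illustrative system names and fields) shows how the ≥70% target might be tracked:

```python
# Sketch of the Tier-1 CBOM-coverage KPI: share of Tier-1 systems with
# a completed CBOM entry, against the >=70% Day-90 target. System
# names and the tier/cbom_complete fields are illustrative.
def tier1_cbom_coverage(systems: list[dict]) -> float:
    tier1 = [s for s in systems if s["tier"] == 1]
    covered = [s for s in tier1 if s["cbom_complete"]]
    return 100.0 * len(covered) / len(tier1) if tier1 else 0.0

inventory = [
    {"name": "payments-core", "tier": 1, "cbom_complete": True},
    {"name": "api-gateway",   "tier": 1, "cbom_complete": True},
    {"name": "settlement",    "tier": 1, "cbom_complete": False},
    {"name": "intranet-wiki", "tier": 2, "cbom_complete": False},
]
coverage = tier1_cbom_coverage(inventory)  # 2 of 3 Tier-1 systems
on_track = coverage >= 70.0
```

Reporting this single percentage to the SteerCo every quarter keeps discovery from quietly stalling after the initial push.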
This is not a gentle ramp-up. It is an aggressive but achievable pace that establishes the program on a footing from which it can sustain multi-year execution.
8. Conclusion
PQC migration is the largest cryptographic overhaul most organizations will ever undertake. It is not a project with a completion date — it is a permanent operational capability that your organization needs to build and sustain. The standards are finalized. The regulatory deadlines are set. The threats — HNDL and TNFL — are already active.
The good news is that the path forward is now clearer than it has ever been. We have standardized algorithms (ML-KEM, ML-DSA, SLH-DSA), proven hybrid deployment patterns, a structured methodology that has been tested in real-world programs at scale, and a regulatory environment that provides both urgency and air cover for the budget requests you need to make.
The journey towards quantum resistance is not merely about staying ahead of a theoretical threat but about evolving our cybersecurity practices in line with technological advancements. The organizations that will navigate this transition successfully are those that start now, frame it as a multi-year program, secure governance and funding that will outlast any single budget cycle, and treat crypto-agility — not just PQC deployment — as the destination.
Start while it’s a project, before it’s a crisis.
For the full structured methodology behind these practical steps, see the Applied Quantum PQC Migration Framework — a free, open (CC BY 4.0), 8-phase lifecycle methodology with cross-cutting concerns, sector extensions for financial services, telecommunications, OT/CNI, and government & defense. For deeper analysis on any of the topics covered here, explore the full library of practitioner-grounded articles at PostQuantum.com.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.
