Post-Quantum

Quantum Readiness for Mission-Critical Communications (MCC)

Introduction to MCC and Quantum Threats

Mission-critical communications (MCC) networks are the specialized communication systems used by “blue light” emergency and disaster response services (police, fire, EMS), military units, utilities, and other critical operators to relay vital information when lives or infrastructure are at stake. These networks prioritize reliability, availability, and resilience – they must remain operational even during disasters or infrastructure outages. For example, in a hurricane that knocks out commercial cell towers and power, robust MCC networks are expected to “rise above” the chaos and keep first responders connected. Communications security is equally paramount: in crisis scenarios, sensitive information (tactical plans, personal data, etc.) must be protected from interception or tampering, even as the network withstands physical disruptions. This dual demand for high resilience and strong security defines MCC networks’ unique requirements. I was involved in several MCC projects over my career. Building an MCC network is a massive infrastructure project, often costing tens of billions of dollars, with a life expectancy measured in decades. So, of course, these networks need to worry about the quantum threat.

Quantum computers exploit phenomena like superposition and entanglement to solve certain mathematical problems exponentially faster than classical machines. Of particular concern are Shor’s algorithm and Grover’s algorithm, two quantum algorithms that directly undermine current cryptographic foundations. Shor’s algorithm (discovered in 1994) can efficiently factor large integers and compute discrete logarithms, meaning a sufficiently large quantum computer could break RSA encryption and Diffie–Hellman/ECC key exchange in polynomial time. In effect, widely used public-key schemes (RSA, elliptic-curve cryptography) would be rendered insecure by a quantum attacker, as the one-way mathematical problems they rely on become tractable. Meanwhile, Grover’s algorithm offers a quadratic speedup for brute-force search. Grover’s is less devastating than Shor’s (it does not outright break symmetric ciphers or hashes), but it halves the effective security of symmetric keys and hash functions. For instance, a brute-force attack on AES-128 with a quantum computer would succeed in roughly $2^{64}$ steps instead of $2^{128}$, making it comparable to a 64-bit key – a security level long considered inadequate. The standard mitigation is to double key sizes (e.g. use AES-256, SHA-512) to regain sufficient security margin against Grover’s algorithm. In summary, a future cryptographically relevant quantum computer (CRQC) would be capable of shredding most current cryptographic protections: breaking digital signatures, decrypting confidential communications, and forging credentials that MCC networks rely on for trust.
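
The halving effect of Grover’s algorithm is simple arithmetic; a quick illustrative sketch:

```python
# Illustrative only: Grover's algorithm gives a quadratic speedup on
# brute-force search, so an n-bit symmetric key offers roughly n/2 bits
# of effective security against a quantum attacker.

def grover_effective_bits(key_bits: int) -> int:
    """Approximate post-quantum security level of an n-bit symmetric key."""
    return key_bits // 2

for key in (128, 192, 256):
    print(f"AES-{key}: ~2^{grover_effective_bits(key)} quantum search steps")
```

This is why doubling the key length (AES-128 to AES-256) restores the original 128-bit security margin against a Grover-equipped adversary.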

The timeline for quantum threats is uncertain, but the risk is not purely theoretical. Security agencies warn of a “harvest now, decrypt later” threat in which adversaries today intercept and store encrypted data (for example, by fiber tapping or signal interception), anticipating that in a decade or less a quantum computer will decrypt it. Any sensitive communication with a long confidentiality requirement – such as government/military secrets, personal medical data, or critical infrastructure control traffic – is vulnerable to this delayed breach. Indeed, U.S. authorities (CISA, NSA, NIST) urge organizations to begin quantum-proofing now, emphasizing that even before Q-Day (the day a quantum break becomes reality) we must act to protect data with long secrecy lifetimes. In practical terms, MCC operators should assume that without intervention, the cryptography underpinning their networks will be obsolete in the face of quantum adversaries.

For a detailed, step-by-step approach to achieving quantum readiness, see my article Ready for Quantum: Practical Steps for Cybersecurity Teams. But MCC networks pose additional challenges beyond even the most complex enterprise networks.

Cryptographic Inventory Challenges in MCC Networks

MCC networks, whether land-mobile radio systems, public-safety LTE, or dedicated military communication links, make extensive use of standard cryptographic protocols. These include public-key algorithms like RSA (Rivest–Shamir–Adleman) and ECC (elliptic curve cryptography) for key exchanges and digital signatures, symmetric ciphers like AES (Advanced Encryption Standard) for voice/data encryption, secure transport protocols such as TLS and IPsec for network links, and PKI-based authentication systems that manage certificates and trust. For example, modern public safety radios implementing the APCO P25 standard use 256-bit AES encryption for over-the-air voice privacy and a centralized PKI to authenticate devices, while LTE-based mission-critical push-to-talk systems rely on IPsec tunnels and TLS sessions for end-to-end security. These very algorithms and protocols – RSA/ECC for key establishment, AES-128/256 for confidentiality, SHA-1/SHA-2 for integrity, etc. – are exactly what quantum computing targets. As discussed, RSA/ECC will need to be replaced or augmented, and AES keys/hash functions strengthened, to withstand quantum attacks. The first step toward quantum-safe communications is therefore knowing where and how these vulnerable cryptographic schemes are used across an MCC environment. This is easier said than done.

Discovering and cataloging all cryptographic assets in a complex MCC network is a major challenge. Cryptography is often deeply embedded and “invisible” in system components – integrated into radio firmware, networking equipment, mobile devices, and applications in ways that operators might not explicitly manage day-to-day. A mission-critical network might involve thousands of endpoints (handheld units, dispatch consoles, routers, satellites, etc.) from multiple vendors, deployed over decades. Each may have its own cryptographic functions: a device might use an on-board VPN client, a secure bootloader with digital signatures, an authentication handshake with the core network, and so on. As one analysis notes, the “invisible nature of cryptography” – designed to work quietly in the background – means many encryption or signing operations occur without administrators’ awareness, especially when third-party libraries or firmware handle them automatically. This lack of visibility can lead organizations to be unaware of the full breadth of public-key cryptography usage in their environment. In the context of MCC, there may be hidden dependencies: for instance, dispatch software might include an outdated TLS library for its mapping module, or a trunked radio repeater might use an embedded default RSA key for control channel encryption. Identifying these instances is akin to peeling an onion with many layers of software and hardware.

Compounding the issue, traditional asset inventory tools don’t easily enumerate cryptographic details. A scanner might detect that a server supports TLS, but not identify that it’s using, say, a 2048-bit RSA certificate set to expire in 2030. Specialized cryptographic discovery tools are emerging to map out algorithms and key lengths in use. These can scan configurations, binaries, and network traffic to flag quantum-vulnerable algorithms. However, automated tools have limitations – they may not see inside proprietary firmware or closed appliances. Embedded cryptography (e.g. in a radio’s firmware or SIM card) might not be documented publicly, and vendors may not divulge details readily. CISA and NSA note that organizations should actively engage vendors for lists of embedded cryptography in their products, since some algorithms are baked into hardware where scanners can’t reach.
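
As a rough illustration of what such discovery tooling does at the configuration layer, here is a minimal pattern-matching sketch. The patterns and the sample config are invented for illustration; real tools go much further, inspecting binaries, certificates, and live traffic:

```python
import re

# Quantum-vulnerable (or weakened) algorithms to flag in config text.
# These patterns are illustrative, not exhaustive.
QUANTUM_VULNERABLE = {
    r"\bRSA[-_ ]?(\d{3,4})\b": "RSA (broken by Shor's algorithm)",
    r"\bECDSA\b|\bECDHE?\b": "Elliptic-curve crypto (broken by Shor's algorithm)",
    r"\bdiffie[- ]hellman\b": "Finite-field Diffie-Hellman (broken by Shor's)",
    r"\bAES[-_ ]?128\b": "AES-128 (weakened by Grover's; prefer AES-256)",
    r"\bSHA[-_ ]?1\b": "SHA-1 (deprecated; collision attacks)",
}

def scan_config(text: str) -> list[str]:
    """Return a list of flagged algorithm usages found in the text."""
    findings = []
    for pattern, note in QUANTUM_VULNERABLE.items():
        for m in re.finditer(pattern, text, re.IGNORECASE):
            findings.append(f"{m.group(0)}: {note}")
    return findings

# Hypothetical VPN/TLS configuration fragment.
sample = """
ike=aes128-sha1-modp2048
auth=rsa-2048
tls_ciphers=ECDHE-RSA-AES256-GCM-SHA384
"""
for finding in scan_config(sample):
    print(finding)
```

Even this toy scanner surfaces four findings in three config lines, which hints at how much vulnerable cryptography a full sweep of a real network tends to uncover.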

Another challenge is assessing the risk and priority of each cryptographic instance once found. Not all crypto is equal – breaking a site-to-site VPN protecting dispatch center communications is far more damaging than breaking an encrypted database field of secondary importance. MCC operators must evaluate which systems carry the most sensitive data or perform the most critical functions, and how long that data needs to remain secure. For example, an emergency incident archive might need confidentiality for decades (to protect privacy or national security), whereas routine patrol logs might lose sensitivity after a few weeks. This analysis informs which cryptographic usages are most urgent to fix in light of quantum threats. In practice, such risk-based prioritization requires correlating the cryptographic inventory with mission impact. Leading agencies recommend mapping crypto use to the data and processes it protects. If, say, a certain VPN tunnel is found to use a vulnerable cipher and carries inter-agency incident traffic, it should be tagged “high priority” for PQC upgrade.
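
One common framing for this prioritization is Mosca’s inequality: if the time data must stay secret (x) plus the time needed to migrate the protecting system (y) exceeds the time until a cryptographically relevant quantum computer arrives (z), that data is already at risk today. A sketch of how that might drive triage; the asset names, year values, and the 10-year horizon are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    name: str
    secrecy_years: float    # x: how long the data must remain confidential
    migration_years: float  # y: estimated time to migrate this system to PQC

CRQC_HORIZON_YEARS = 10.0   # z: assumed years until a CRQC (an assumption!)

def at_risk(asset: CryptoAsset, horizon: float = CRQC_HORIZON_YEARS) -> bool:
    # Mosca's inequality: x + y > z means data encrypted today can be
    # harvested now and decrypted before it loses its sensitivity.
    return asset.secrecy_years + asset.migration_years > horizon

inventory = [
    CryptoAsset("incident archive (inter-agency VPN)", 25, 4),
    CryptoAsset("routine patrol logs", 0.1, 2),
]
for a in sorted(inventory, key=at_risk, reverse=True):
    print(f"{a.name}: {'HIGH priority' if at_risk(a) else 'lower priority'}")
```

Under these example numbers, the long-lived incident archive is flagged high priority while the short-lived patrol logs are not – exactly the kind of ranking the risk assessment should produce.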

Finally, MCC networks face complexity due to dependencies on core systems and interoperability requirements. A public safety communication system often spans multiple organizations and jurisdictions – interoperability is a key tenet, enabling different agencies to communicate securely during joint operations. This means the cryptographic choices in one part of the network can’t be changed in isolation. A legacy algorithm might linger because it’s the only one universally supported by all agencies’ equipment. A real-world example is the now-deprecated DES encryption in older radios: many agencies continued supporting 56-bit DES long after it was known to be weak, simply to maintain compatibility, until a coordinated migration to AES could be achieved. Similarly, an MCC operator might discover that their core authentication server still uses an RSA-1024 certificate because neighboring systems haven’t been upgraded to accept anything stronger. Changing cryptography in such an environment is a delicate dance – one must inventory not just one’s own assets, but also understand cryptographic touchpoints with external networks and partners. All these factors make cryptographic inventory and risk assessment a non-trivial prerequisite for quantum readiness. It requires time, tooling, vendor support, and cross-organizational cooperation. Yet, it is an essential foundation: as the saying goes, “you can’t protect what you don’t know you have.” Comprehensive knowledge of current crypto usage is the only way to effectively plan a migration to quantum-safe alternatives.

Post-Quantum Cryptography (PQC) and MCC Upgrades

The good news is that the cryptographic community has not been idle in the face of the quantum threat. A massive research effort over the past decade has led to post-quantum cryptography (PQC): new algorithms for encryption, key exchange, and digital signatures designed to be secure against quantum attacks (while also remaining secure against classical attacks). PQC is implemented in software and hardware like traditional crypto; it does not require exotic physics – in contrast to quantum key distribution (QKD) – and thus can be deployed in conventional networks with software updates or new chips. After a six-year public competition, NIST announced in 2022 the first set of quantum-resistant algorithms selected for standardization. Four algorithms were chosen: CRYSTALS-Kyber (a lattice-based key encapsulation mechanism for encryption/key establishment) and three digital signature schemes – CRYSTALS-Dilithium (lattice-based), FALCON (lattice-based), and SPHINCS+ (hash-based). Kyber will serve as a replacement for RSA/ECDH key exchanges (e.g. in TLS handshakes or IPsec setups), and Dilithium/Falcon/SPHINCS+ will replace RSA/ECDSA for signing and authentication (e.g. in certificates, code signing, login protocols). These algorithms rely on hard mathematical problems (lattices or hash functions) that neither classical nor known quantum algorithms can solve efficiently. In August 2024, NIST published final FIPS standards for three of the four: Kyber (ML-KEM, FIPS 203), Dilithium (ML-DSA, FIPS 204), and SPHINCS+ (SLH-DSA, FIPS 205), with the Falcon standard (FN-DSA) still to follow. This NIST effort is closely watched worldwide; organizations are already prototyping these PQC algorithms against the new standards.

However, upgrading an MCC network to use PQC is not as simple as dropping in new algorithms. Significant performance and integration considerations must be addressed, given the constraints of mission-critical environments. PQC schemes generally have different performance profiles compared to RSA or ECC. In many cases, keys and signatures are larger, and computations can be more intensive, which could impact bandwidth, latency, and device resource usage. For example, lattice-based algorithms like Kyber and Dilithium achieve high security and speed, but their public keys and ciphertexts/signatures are on the order of kilobytes rather than bytes. One study notes that Kyber’s key material is around 1.5 KB in size – a drastic increase from the 256-byte (2048-bit) RSA keys common today – yet still “manageable” in most applications due to its strong efficiency trade-off. Indeed, Kyber encryption is very fast and its 1–2 KB transmissions are usually acceptable even for real-time communications. But other PQC options can be heavier; for instance, a Dilithium digital signature (at NIST security level 2 or 3) is about 2–3 KB in size, compared to 64 bytes for an ECDSA signature. In certain MCC contexts that transmit small, time-sensitive messages, this can pose a problem. A recent evaluation in the context of vehicular V2V communications (IEEE 802.11p) found that a Dilithium signature actually exceeded the entire message payload size (which was 2304 bytes in that system) and introduced too much overhead for the fast beacon messages cars exchange. The authors concluded that a more compact PQC signature (Falcon) was necessary for that use-case. This illustrates how bandwidth overhead of PQC could constrain some mission-critical applications – whether it’s safety messages, sensor readings, or voice frames, MCC networks often operate under strict latency and bandwidth budgets.
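
To make the overhead concrete, here is a small sketch comparing approximate published signature sizes against the 2304-byte payload from the V2V study discussed above. The byte counts are ballpark figures and vary by parameter set and standard version:

```python
# Approximate signature sizes in bytes (published figures; treat these as
# ballpark values -- exact sizes depend on parameter set and version).
SIGNATURE_BYTES = {
    "ECDSA P-256 (classical)": 64,
    "RSA-2048 (classical)": 256,
    "Falcon-512 (PQC, lattice)": 666,
    "Dilithium2 (PQC, lattice)": 2420,
    "SPHINCS+-128s (PQC, hash)": 7856,
}

PAYLOAD_BYTES = 2304  # message payload size from the cited V2V evaluation

for name, size in sorted(SIGNATURE_BYTES.items(), key=lambda kv: kv[1]):
    overhead = 100 * size / PAYLOAD_BYTES
    flag = "  <-- exceeds entire payload!" if size > PAYLOAD_BYTES else ""
    print(f"{name}: {size} B ({overhead:.0f}% of payload){flag}")
```

The comparison shows why that study reached for Falcon: its signatures are an order of magnitude smaller than Dilithium’s, at the cost of the heavier signing computation discussed below.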

Computation and hardware requirements are another consideration. Many MCC devices are embedded systems or battery-powered units that don’t have high-end CPUs. PQC algorithms, especially those based on complex math structures, may require more CPU cycles or memory. As a case in point, researchers testing PQC on smart cards (a common form factor for secure radio modules or authentication tokens) found that Falcon signatures couldn’t even be generated on a typical smart card due to RAM and speed limitations, and SPHINCS+ signatures were too large to handle, whereas Dilithium just barely fit within the device’s capabilities. In that study, only Dilithium could be implemented on current-generation smart card hardware without exceeding memory or timing constraints. This suggests that some legacy hardware in MCC ecosystems might need upgrades or hardware acceleration to use PQC effectively. For instance, an older radio with a slow microcontroller might struggle to perform a Dilithium signature for every push-to-talk authentication, unless its firmware is optimized or it offloads crypto to a co-processor.

It’s not just end devices – network infrastructure like routers, base stations, and controllers must also handle PQC. Imagine a secure tunnel between an incident command center and a field base: today it might use IPsec with RSA for key exchange. Replacing that with a PQC key exchange (Kyber) should be straightforward in theory, but the network appliance’s software must support the algorithm, and the added message size might slightly delay the tunnel setup. If thousands of such tunnels re-key periodically, any increase in CPU or bandwidth usage could accumulate. Initial tests of PQC in TLS and VPN protocols are encouraging, showing that algorithms like Kyber and Dilithium can often be integrated with “minimal performance impact” if properly engineered. For example, Google and Cloudflare’s 2019 experiment with post-quantum TLS found that a hybrid ECDH+PQC key exchange added only a few milliseconds and bytes to the handshake. Nonetheless, careful performance evaluation is needed for MCC, because even small inefficiencies or latency spikes might not be acceptable during critical operations (imagine an encryption delay causing a dispatch message to arrive late).

Beyond raw performance, there are migration and compatibility challenges. Upgrading to PQC will touch many layers: cryptographic libraries, communication protocols, certificate formats, hardware tokens, and operational procedures. Software compatibility is a concern – many PQC algorithms have different input/output sizes that existing protocols weren’t designed for. Consider X.509 certificates (widely used in PKI for MCC networks): they currently accommodate RSA or ECDSA signatures. Inserting a Dilithium signature (which is much larger) might exceed size limits in some legacy systems or at least require protocol extensions. Standards bodies like the IETF are actively working on such updates (e.g., defining new TLS cipher suites for PQC, new certificate types, and hybrid key exchange mechanisms). MCC operators will eventually need to adopt updated standards or proprietary solutions to handle PQC data formats.

On the hardware side, devices that support pluggable crypto (e.g., via firmware upgrade) will need software updates, whereas some hardware might be too constrained to ever support PQC without replacement. A radio manufactured 15 years ago with a fixed crypto chip might simply lack the ability to implement Kyber unless a newer model is deployed. This raises budgeting and logistics issues: rolling out new hardware to first responders or utility field teams is costly and slow. Yet ignoring those old devices means leaving a weak link in security. There is also the operational challenge of ensuring continuity during migration. MCC networks can’t just go offline for a “crypto upgrade”; they must continue running and interoperate between old and new crypto modes. We’ll discuss strategies for integration and coexistence in the next section.

In summary, PQC algorithms offer viable replacements for vulnerable cryptography, and standards are on the horizon to formalize their use. But adopting them in mission-critical environments will involve trade-offs. Performance testing, hardware upgrades, and protocol adaptations are all part of the transition. History has shown that crypto migrations are slow – past transitions (e.g. from 3DES to AES, or SHA-1 to SHA-256) took 5–10+ years to fully implement in industry. We are likely facing a similar journey for PQC, and in the MCC domain it will require especially careful planning to ensure that security is enhanced without compromising the reliability and real-time operation of these critical systems.

Integration, Bridging, and Interoperability Considerations

Upgrading a national-scale MCC network (for example, a country-wide public safety radio system or a large utility’s communications network) to quantum-safe cryptography is a complex systems-integration project. These networks are often a patchwork of different technologies, vendors, and generations of equipment – all of which need to continue working together throughout the transition. A central challenge is maintaining interoperability between legacy (classical) cryptographic systems and new quantum-resistant systems. In practice, there will be a lengthy period where not all components or partner agencies upgrade at the same time. How can an MCC operator introduce PQC without breaking communications with those still using classical crypto?

One recommended approach is to use hybrid cryptographic modes and bridging techniques that allow classical and quantum-safe systems to co-exist securely. In a hybrid scheme, one combines a conventional algorithm with a PQC algorithm in a single operation. For example, a VPN tunnel could perform two parallel key exchanges: one with ECDH (classical) and one with Kyber (post-quantum), and use the resulting keys in combination. This way, the tunnel is secure as long as at least one of the algorithms remains secure. Even if an adversary eventually cracks one, the other still protects the confidentiality of the session. Hybrid key exchange is a belt-and-suspenders strategy recommended during the transition period. Standards are emerging to support this; the IETF has published RFC 8784 (mixing post-quantum preshared keys into IKEv2/IPsec) and RFC 9370 (multiple key exchanges in IKEv2), and similar work for TLS defines how to carry both classical and PQC key exchange payloads in one handshake. MCC operators upgrading secure tunnels, VPNs, or application-layer security can leverage such standards to ensure backward compatibility (the classical part ensures older endpoints can connect) while adding quantum safety (the PQC part secures against future quantum adversaries).
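
In such a hybrid exchange, the “use the keys in combination” step is typically a key derivation over the concatenated shared secrets. A minimal sketch, using an HKDF built from Python’s standard library; the two secret values are random stand-ins, not the output of a real handshake, and the context label is hypothetical:

```python
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an all-zero salt and SHA-256."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()  # extract
    okm, t, i = b"", b"", 1
    while len(okm) < length:                                    # expand
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

# Stand-ins for the two shared secrets negotiated in parallel.
# In a real handshake these would come from ECDH and Kyber (ML-KEM).
ecdh_secret = os.urandom(32)   # classical share (placeholder value)
kyber_secret = os.urandom(32)  # post-quantum share (placeholder value)

# Hybrid session key: secure as long as EITHER input stays secret,
# because recovering the key requires knowing the full concatenation.
session_key = hkdf_sha256(ecdh_secret + kyber_secret, info=b"hybrid-tunnel-v1")
print(session_key.hex())
```

Real protocol designs pin down the exact concatenation order and KDF labels, but the security argument is the one in the comment: an attacker must break both inputs to recover the session key.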

Another aspect of bridging is at the level of credentials and identity management. Consider digital certificates and signatures used in MCC PKIs: how do we transition those to PQC? One solution is hybrid certificates, where (for instance) a device could be issued a certificate that contains both an ECC public key and a PQC public key, each signed by the authority. Legacy systems would ignore the PQC portion and validate the ECC signature as usual, while upgraded systems could require the PQC signature or validate both. This allows a phased rollout of new trust anchors. ETSI’s quantum-safe migration guidelines suggest that new trust infrastructure (CAs, certificate chains) might be needed to introduce PQC into an existing PKI, or alternately dual-signing/hybrid chains can be used if the existing software is agile enough to handle them. The key is careful coordination: one cannot simply swap out a root certificate from RSA to Dilithium overnight without breaking devices that don’t recognize Dilithium signatures. Instead, running classical and PQC systems in parallel for some years – with cross-signatures or bridging gateways – will be necessary.
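
The acceptance logic for a hybrid certificate during such a phased rollout fits in a few lines. A hypothetical sketch; the policy names and rules are illustrative, not taken from any standard:

```python
from enum import Enum

class Policy(Enum):
    LEGACY = "validate the classical signature only"
    TRANSITIONAL = "accept either signature during migration"
    PQC_REQUIRED = "post-quantum signature required"

def accept_cert(classical_ok: bool, pqc_ok: bool, policy: Policy) -> bool:
    """Decide whether to trust a hybrid certificate carrying both an
    ECC signature and a PQC signature, given which ones verified."""
    if policy is Policy.LEGACY:
        return classical_ok           # older verifier ignores the PQC part
    if policy is Policy.TRANSITIONAL:
        return classical_ok or pqc_ok
    return pqc_ok                     # upgraded verifier insists on PQC
```

The point of the sketch is that the same hybrid certificate satisfies all three verifier populations at once, which is what lets old and new endpoints coexist on one PKI during the transition.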

An example of a bridging gateway could be a secure proxy that translates between classical and quantum-safe encryption. Imagine a scenario where two agencies need to communicate, but one has upgraded to PQC-only radios and the other still has legacy AES-only radios. A gateway node could receive PQC-encrypted traffic, then re-encrypt it with classical AES for the legacy side (and vice versa). This is not ideal for the long term (because the gateway must be trusted with the plaintext), but as an interim interoperability fix in mission-critical scenarios, such stopgaps might be employed in limited scope. It underscores the importance of planning the upgrade path such that critical interoperability links (like mutual aid channels, or inter-department networks) are addressed early and tested thoroughly.

Engaging vendors and suppliers early is crucial to integration success. Most MCC operators depend on third-party vendors for their communication equipment and cryptographic software. The reality is that organizations cannot fully implement PQC until their vendors support it in products – be it the radio firmware, the VPN appliance, or the CAD (computer-aided dispatch) software. A recent industry advisory bluntly states: “Vendors of encryption tools and devices must update their products to support PQC. Organizations cannot secure their systems without their vendors first providing the necessary cryptographic libraries, certificates, and protocols.” Therefore, a key integration task is working with vendors on timelines and roadmaps for PQC-capable upgrades. Many large vendors (Cisco, Motorola Solutions, Ericsson, etc. in the MCC space) are already part of standards efforts and have prototype solutions, but customers need to push for concrete support. This can involve contractual discussions (e.g. ensuring maintenance agreements cover “crypto agility” updates), participating in vendor beta programs for PQC features, and even joint testing. Technology refresh cycles for mission-critical systems are typically long, but when a refresh is planned (say a new generation of network hardware), the RFPs should include requirements for quantum-resistant cryptography or at least the ability to plug in new algorithms (crypto agility).

Speaking of cryptographic agility – this is a design property that facilitates integration of new algorithms. Systems that were built with agility in mind (i.e. algorithms are not hard-coded but can be changed via configuration or updates) will handle the PQC transition much more smoothly. Unfortunately, not all legacy MCC systems were designed this way. An ETSI report notes that unless a system has some crypto agility, it can be “very difficult to re-use” any of its building blocks for new algorithms. Going forward, a best practice is to demand crypto agility in all MCC network components. In the integration phase, one should inventory which parts of the system are agile and which are rigid. For the rigid parts (e.g. a device that only supports AES-256 and nothing else), the options are either a firmware upgrade from the vendor to add agility, or ultimately replacement.

Lastly, national and cross-organizational coordination will significantly influence integration. Public safety communications often span city, state/provincial, and federal systems that interconnect. Planning quantum-safe upgrades might involve joint working groups or task forces to align strategies. It’s analogous to the coordinated moves in the past to standardize on AES encryption for radios (phasing out older ciphers) – it required consensus and scheduling among many stakeholders. Similarly, integrating PQC might be done in phases across regions to avoid a patchwork where some areas are secure and others are wide open. During integration testing, operators should simulate mixed environments (some nodes PQC-enabled, some not) to ensure fail-safe behavior. The bridging period will likely last years; hence robust strategies to handle dual cryptographic modes will be a staple of MCC network operations during that time. In summary, integration is about compatibility and continuity: introducing quantum-safe mechanisms in a way that enhances security without interrupting mission-critical connectivity. By using hybrid approaches, demanding vendor support, and carefully coordinating interoperability, MCC operators can navigate this complex transition successfully.

Best Practices for MCC Quantum Readiness

Preparing mission-critical communications networks for the post-quantum era is a multi-year journey that should start now. Below, we outline best practices and a roadmap that MCC operators and cybersecurity teams can follow to assess and mitigate quantum risks. These practices emphasize cryptographic agility, proactive planning, and coordination – all essential to ensure communications remain secure and resilient in the face of quantum breakthroughs.

Form a Quantum-Readiness Task Force: Begin by establishing a dedicated project team or working group responsible for your organization’s quantum security transition. This team should include stakeholders from network operations, cybersecurity, IT, and procurement. Clearly assign roles and responsibilities – for example, someone to lead cryptographic inventory efforts, someone to liaise with each major vendor, etc. Having a focused team ensures the effort doesn’t fall through the cracks and that there is ownership to drive the roadmap forward. Executive support is important too, given that budget and policy decisions will be needed along the way.

Conduct a Comprehensive Cryptographic Inventory and Risk Assessment: As emphasized earlier, you can’t fix what you haven’t identified. Undertake a thorough cryptographic discovery across all systems – radios, network gear, applications, databases, user devices, etc. Document all instances of encryption, key exchange, digital signatures, and the algorithms/key lengths in use. Don’t forget to include operational technology (OT) components if, for instance, your MCC network interfaces with SCADA systems or IoT sensors (common in utilities). Once the inventory is compiled, perform a quantum risk assessment: identify which assets are using quantum-vulnerable crypto and gauge the impact if those were compromised. Focus on “crown jewels” first – e.g., the confidentiality of dispatcher-to-unit communications, or the integrity of emergency alerts. This will help prioritize where to apply mitigations and upgrades first. The inventory also sets a baseline to measure progress. (Note: This process may reveal some quick wins – for example, upgrading any remaining uses of AES-128 to AES-256 or doubling RSA 2048 to 4096 bits can provide interim security improvements against current brute-force or Grover’s algorithm threats.)

Adopt Cryptographic Agility and Future-Proofing Measures: Strive to make your communications architecture as crypto-agile as possible. In practical terms, this means updating software, firmware, and configurations to support multiple cryptographic algorithms or be easily upgradable. For instance, if your radio management software currently assumes a specific algorithm (say ECDSA for authentication), work with the vendor to patch it so that it can accept either ECDSA or Dilithium signatures, perhaps via a config setting or dual validation mode. Ensure that any new system or device you procure explicitly supports PQC or at least has a roadmap to support it – this can be a criterion in RFPs (“must be upgradable to NIST-approved PQC algorithms”). Also, design redundancy such that if a crypto component fails or needs change, it can be done without system-wide downtime. Embracing principles of modularity (separating cryptographic modules from business logic) and open standards will ease future swaps. Essentially, make your network ready to ‘plug in’ new cryptographic components as they mature, so you are not stuck with a fixed algorithm that grows obsolete.
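
One concrete agility pattern is an algorithm registry: callers request signing by name, so swapping one scheme for another later is a configuration change rather than a code change. A sketch of the pattern; the HMAC “signers” below are placeholders only, standing in for real signature implementations such as ECDSA or Dilithium:

```python
import hashlib
import hmac
from typing import Callable

# Registry mapping algorithm names to signing functions.  Adding a new
# scheme (e.g. a future PQC signer) means registering one more entry;
# no caller code changes.
SIGNERS: dict[str, Callable[[bytes, bytes], bytes]] = {}

def register(name: str):
    def wrap(fn):
        SIGNERS[name] = fn
        return fn
    return wrap

@register("hmac-sha256")        # placeholder standing in for ECDSA
def sign_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

@register("hmac-sha3-256")      # placeholder standing in for Dilithium
def sign_hmac_sha3(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha3_256).digest()

def sign(algorithm: str, key: bytes, msg: bytes) -> bytes:
    # One config line ("algorithm = ...") selects the scheme at runtime.
    return SIGNERS[algorithm](key, msg)
```

The same indirection applies to key exchange and encryption: the business logic never names a specific algorithm, which is what makes a later swap-out a configuration exercise instead of a redesign.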

Engage in Vendor and Industry Collaboration: Open lines of communication with your technology providers about their quantum-safe product strategies. Ask for their PQC roadmaps – for example, when will their base station software support hybrid key exchange? Do they plan firmware updates for older devices or only new models? According to government guidance, vendors ideally should publish their plans and be working on PQC integrations already. If some critical vendor appears to have no plan, that’s a red flag – you may need to pressure them (perhaps in coordination with other customers or regulators) or consider alternative solutions. It’s also wise to coordinate with peer organizations and industry groups. Many sectors have working groups for quantum readiness (for instance, national public safety communications councils, utility cyber working groups, etc.). Sharing information on pilots and testing can reduce duplicated effort. By working with standards bodies and industry consortia, MCC operators can also influence the development of standards to suit their needs (for example, ensuring that a forthcoming ETSI or 3GPP standard addresses mission-critical use-cases). Government agencies may provide support as well – in the US, CISA is developing tools and guidance for automated crypto inventory, and in some countries, grants or funding may be available for critical infrastructure to upgrade security.

Implement Interim Risk Mitigations: While the ultimate goal is deployment of PQC, there are steps to mitigate quantum risk in the interim. One is to increase key sizes and strengthen classical crypto where possible (as noted, use AES-256 instead of 128, use SHA-384/512, and use the longest acceptable RSA/ECC keys). This won’t stop a quantum attack indefinitely, but it raises the bar and buys time (for instance, an RSA-4096 key is harder for a quantum computer to crack than RSA-2048, though not quantum-proof). Another mitigation is encrypting data in layers – e.g., combine link encryption and end-to-end encryption, so an attacker has to break multiple keys. Critically, identify any data-at-rest repositories that are secured with weak encryption and re-encrypt them with stronger algorithms, since stored data is subject to harvesting attacks. Also, if you haven’t already, ensure perfect forward secrecy (PFS) is enabled on all session protocols (TLS, etc.), so past sessions remain secure even if long-term keys are later broken. PFS won’t stop a quantum attacker from breaking the key exchange itself eventually, but it prevents one key compromise from exposing large amounts of historical data. These measures can reduce the window of vulnerability during the transition period.
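
Enabling PFS can sometimes be a one-line configuration change. For example, in Python’s ssl module, restricting a context to TLS 1.3 guarantees forward secrecy, since TLS 1.3 removed static-RSA key exchange entirely and every session uses an ephemeral (EC)DHE exchange. A sketch; the equivalent knob in your own stack will differ:

```python
import ssl

# Restrict a server-side TLS context to TLS 1.3 only.  TLS 1.3 has no
# non-PFS key exchange modes, so every session derives its keys from an
# ephemeral exchange and past traffic stays safe if the long-term
# certificate key is later compromised.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

Note the caveat from the text still applies: PFS protects against classical key compromise, but a quantum attacker who records the handshake can still break the ephemeral ECDHE exchange itself, which is why the hybrid key exchanges discussed earlier remain the end goal.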

Test PQC Implementations in Pilot Environments: Before wide-scale deployment, set up pilot projects or lab tests of PQC within your network. For example, you might equip a subset of devices (say, a dozen radios and a base station) with a trial firmware that implements a PQC-secured voice channel, and observe the performance and any issues. Or test a PQC-enabled VPN between two critical sites using early software libraries (OpenSSL has incorporated some PQC algorithms in experimental builds, for instance). Pilot testing will reveal practical issues – perhaps a certain algorithm causes too much latency, or key management for the new scheme is cumbersome for administrators. Use these insights to adjust your plan. It also helps build technical expertise on your team; the engineers become familiar with PQC operations and can develop standard operating procedures. Some operators are also choosing to run “crypto agility drills”, where they simulate the swap-out of one algorithm for another in their systems to see how quickly and safely they can do it. This prepares them for the day a real change (like a sudden vulnerability discovery) must be handled.
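A "crypto agility drill" can be rehearsed even in miniature. The hypothetical sketch below (Python, standard library only) routes hashing through a named-algorithm registry, so retiring one algorithm for another is a one-line configuration change rather than a hunt through the codebase; the registry layout and function names are assumptions for illustration:

```python
import hashlib

# Registry indirection: calling code asks for a role ("default"),
# never a hard-coded algorithm name.
HASH_REGISTRY = {
    "default": "sha256",
}

def digest(data: bytes, role: str = "default") -> bytes:
    """Hash `data` with whatever algorithm the registry currently assigns."""
    return hashlib.new(HASH_REGISTRY[role], data).digest()

def drill_swap(role: str, new_algorithm: str) -> None:
    """Simulate an emergency algorithm swap (e.g. after a break is announced)."""
    hashlib.new(new_algorithm, b"")  # fail fast if the algorithm is unavailable
    HASH_REGISTRY[role] = new_algorithm

# Drill: retire SHA-256 in favour of SHA3-256 in a single step.
drill_swap("default", "sha3_256")
```

The same indirection pattern applies to signature and key-exchange algorithm selection; the drill measures how long the equivalent swap takes in production systems.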

Develop a Phased Migration Roadmap: Create a formal roadmap document for migrating to quantum-safe cryptography, broken into phases with target dates. For example, Phase 1 might be inventory and testing (present to 2025), Phase 2 introduction of hybrid crypto in core network links (2025–2026), Phase 3 upgrading end-user devices (2027–2029), and Phase 4 phasing out legacy algorithms entirely (e.g. 2030 and beyond). Include milestones such as “upgrade PKI to support PQC certificates by Q2 2026” or “all new devices from 2025 procurement onward use PQC.” Having a timeline helps coordinate with stakeholders and measure progress. Keep the roadmap updated as technology and standards evolve – it’s a living document. Importantly, incorporate contingencies: if a breakthrough in quantum computing happens faster than expected, how will you accelerate the plan? Conversely, if an expected standard or product is delayed, how will you adapt? The roadmap should also consider policy and compliance milestones – for instance, any regulatory deadlines (some governments may mandate that critical infrastructure be quantum-safe by a certain year). By planning in phases, you can show incremental improvement (which is useful for leadership buy-in) and ensure no aspect is overlooked.
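A roadmap of the kind described above is easier to track and report on when captured as structured data that audit or dashboard scripts can consume. The sketch below is a minimal, hypothetical representation; the phase names, dates, and milestones are placeholders echoing the examples in the text, not recommendations:

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    start_year: int
    end_year: int
    milestones: list = field(default_factory=list)
    done: int = 0  # milestones completed so far

    def progress(self) -> float:
        """Fraction of this phase's milestones completed."""
        return self.done / len(self.milestones) if self.milestones else 0.0

# Illustrative roadmap mirroring the four example phases in the text.
ROADMAP = [
    Phase("Inventory and testing", 2024, 2025,
          ["complete cryptographic inventory", "lab-test hybrid key exchange"]),
    Phase("Hybrid crypto on core network links", 2025, 2026,
          ["upgrade PKI to support PQC certificates"]),
    Phase("End-user device upgrades", 2027, 2029,
          ["PQC-capable firmware on all new devices"]),
    Phase("Legacy algorithm retirement", 2030, 2035,
          ["disable classical-only cipher suites"]),
]
```

Because the roadmap is a living document, keeping it in machine-readable form also makes it trivial to diff revisions as standards and timelines shift.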

Integrate Quantum-Readiness into Business Continuity and Disaster Recovery Plans: Since MCC networks are all about crisis communications, it’s fitting to incorporate quantum threat preparedness into your existing resilience planning. Update your risk register to include the quantum computing threat (sometimes dubbed “Q-Day” risk). Ensure that business continuity plans consider the scenario of cryptographic failure – for example, if suddenly your VPN certificates were cracked, do you have an alternate secure channel? Part of being quantum-ready is being able to pivot quickly if a sudden cryptanalytic advance is announced. This might mean having pre-vetted PQC configurations on standby. Just as Y2K preparations involved contingency plans in case systems failed in the year 2000, Q-Day prep means thinking ahead to maintain operations if an emergency re-key or algorithm switch is needed. Conduct drills or tabletop exercises focused on cybersecurity emergency response that include a quantum dimension (e.g. “what if an adversary decrypts intercepted traffic? How would we detect and respond?”). This will improve overall security readiness in addition to quantum-specific readiness.

By following these best practices, MCC operators can steadily progress toward a state of “crypto agility” and quantum resilience. It’s worth noting that early preparation is not wasted effort: many steps (like inventory, crypto hygiene, and agility improvements) have immediate security benefits, reducing the risk from classical attacks and easing other upgrades. By starting now, operators give themselves the maximum lead time to adapt gradually and safely, rather than rushing at the last minute. As NSA and others have noted, a successful PQC migration will take time to plan and execute, so every bit of early work lowers the eventual operational impact. The end goal is that, when large quantum computers finally emerge, mission-critical communications have been systematically hardened – ensuring that even the most powerful new adversaries cannot compromise the confidentiality, integrity, or availability of critical communications.

Global Perspectives and Standardization Efforts

The push for quantum-safe communications is a global endeavor, involving standards bodies, government agencies, and industry consortia around the world. MCC operators can take confidence that a broad ecosystem is working on the problem – but they also need to stay informed and aligned with these external efforts. This section provides a brief overview of key international initiatives and recommendations related to quantum security, with an eye toward their impact on mission-critical communications.

NIST and International Standardization: The U.S. National Institute of Standards and Technology (NIST) has led the charge with its PQC competition and standards, which have global influence. The first set of NIST PQC standards (FIPS 203, 204, 205 as discussed) is expected to be formally published in 2024. Many countries and industries are planning to adopt these algorithms as a baseline for security. In parallel, the IETF (Internet Engineering Task Force) is updating core Internet protocols to support PQC – for example, there are IETF drafts and RFCs for adding PQC to TLS, SSH, and IKE (VPN) protocols. MCC networks that rely on standardized protocols will benefit from these enhancements once they are finalized. It’s wise for MCC operators (or their tech vendors) to participate in pilot interoperability events and standardization discussions if possible. For example, the IETF has held PQC hackathons to test multi-vendor interoperability of PQC in TLS; similar engagements in the MCC domain (perhaps via forums like 3GPP for LTE/5G mission-critical services, or IEEE for land-mobile radio) will be valuable.

NSA and National Security Guidance: In 2022, the U.S. National Security Agency released the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), which outlines the future quantum-resistant algorithms to be used for protecting classified and defense systems. This suite basically mirrors the NIST selections (using lattice-based schemes for public key encryption and signatures) and provides a roadmap for U.S. National Security Systems to transition. Notably, NSA’s guidance and FAQ emphasize that post-quantum cryptography is the preferred solution for protecting communications long-term, rather than quantum key distribution (QKD). NSA views QKD as expensive and limited in applicability, whereas robust mathematical encryption (PQC) is more cost-effective and easier to integrate into existing networks. This stance is relevant for MCC operators deciding where to invest – for most, the practical path will indeed be classical networks upgraded with PQC, not exotic quantum physics links. Other nations’ security agencies have issued similar guidance. For instance, the U.K. National Cyber Security Centre (NCSC) published a whitepaper, “Preparing for Quantum-Safe Cryptography”, and related blog posts, which echo that organizations should plan for adoption of PQC and that doing so, while conceptually straightforward, is a “very complicated undertaking” requiring careful management. The complexity arises from exactly the issues we discussed: interoperability, new standards, and the sheer ubiquity of vulnerable cryptography.

ETSI and European Initiatives: The European Telecommunications Standards Institute (ETSI) has been proactive on quantum-safe cryptography for years, hosting an annual Quantum-Safe Cryptography (QSC) conference and publishing reports. ETSI’s Industry Specification Group (ISG) on Quantum-Safe Cryptography has released guidelines on migration, best practices, and threat assessment (e.g., ETSI TR 103 619 on migration strategies, which we referenced) that provide a comprehensive framework for organizations to follow. These often highlight the need for crypto agility and hybrid solutions, as well as considerations for compliance and business continuity during transition. The European Union Agency for Cybersecurity (ENISA) has also produced reports such as “Post-Quantum Cryptography: Current State and Quantum Mitigation” (2021) and an Integration Study (2022). ENISA’s guidance reinforces that the transition doesn’t end with picking algorithms; integration into existing protocols and systems is key, and they recommend approaches like hybrid encryption (adding PQC as an extra layer alongside classical crypto) as a pragmatic step. European countries under the EU are expected to follow suit in adopting NIST’s PQC algorithms (once standardized) and have collaborative projects (e.g., Germany’s BSI and France’s ANSSI have both funded research and issued technical guidance on PQC). MCC operators in Europe should align with ENISA’s recommendations and any national guidelines to ensure consistency, especially if their network spans multiple countries or cooperates internationally (e.g., NATO military communications, which are also analyzing quantum-proof requirements).
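The hybrid approach ENISA recommends can be illustrated at the key-derivation level: combine a classical shared secret (e.g. from ECDH) with a PQC KEM shared secret so that the session key survives a break of either component. The sketch below is a simplified, hypothetical HKDF-style combiner using only Python’s standard library; real deployments should follow an established combiner specification (such as the constructions being standardized at the IETF) rather than this illustration:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pqc_secret: bytes,
                       context: bytes = b"mcc-hybrid-v1") -> bytes:
    """Derive a 32-byte session key from two independent shared secrets.

    An attacker must recover BOTH input secrets to reconstruct the output,
    so the session key remains safe as long as either the classical (ECDH)
    or the post-quantum (KEM) exchange is unbroken.
    """
    ikm = classical_secret + pqc_secret  # concatenate the two secrets
    # HKDF-Extract with an all-zero salt (RFC 5869 default behavior)
    prk = hmac.new(b"\x00" * 48, ikm, hashlib.sha384).digest()
    # HKDF-Expand, first output block, bound to a context label
    okm = hmac.new(prk, context + b"\x01", hashlib.sha384).digest()
    return okm[:32]
```

The design choice here is "concatenate then KDF": simple, and secure if at least one input stays secret, which is exactly the hedge hybrid deployment is meant to provide during the transition.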

ITU and Quantum Communications: The International Telecommunication Union (ITU) has taken an interest in both PQC and QKD as part of future network standards. ITU-T Study Group 17 (Security) and SG13 (Future Networks) are working on a series of recommendations in the ITU-T Y.3800 series that cover network support for QKD and quantum-safe encryption. For example, ITU-T Y.3800 is an overview of networks supporting QKD, and further standards aim to define interoperability and management for quantum key distribution networks. While QKD (which uses quantum states of photons to exchange keys with information-theoretic security) is not likely to replace public-key cryptography on a broad scale, it could be relevant for certain high-risk MCC links (such as connecting major data centers or command facilities) where fiber-based QKD could add an extra layer of security. ITU’s efforts mean that down the line, there may be internationally agreed interfaces for combining QKD with classical networks. It’s still early, but MCC operators should keep an eye on these developments if their threat model warrants quantum physics-based defenses in addition to PQC. The main takeaway is that global standards bodies acknowledge the quantum threat to communications and are actively developing solutions on all fronts – pure mathematics (PQC) and quantum physics (QKD) – to secure future networks.

Collaboration and Information Sharing: Globally, there is a push for collaboration on quantum readiness. The U.S. has formed a National Quantum Coordination Office and issued National Security Memoranda (NSM-10 in 2022) calling for a unified approach to quantum-resistant cryptography across government and critical industries. This includes timelines for federal agencies to inventory and migrate by set dates (many aiming for completion by 2035, with major progress in the late 2020s). In Europe, there are similar joint efforts, and countries like Germany, the Netherlands, China, Japan, and others have committed significant funding to quantum tech and quantum-safe communications R&D. Conferences and forums like the ETSI/IQC Quantum-Safe Cryptography workshop, RSA Conference (which now has PQC tracks), and academic workshops (PQCrypto, etc.) are excellent places for MCC tech professionals to stay up to date. Also, organizations such as the Quantum Economic Development Consortium (QED-C) in the US and the Global Risk Institute in Canada have published “quantum threat timeline” reports and risk assessment guidelines that can help justify and plan quantum security investments.

In summary, the global community is mobilizing to address quantum cyber threats. Standards for PQC are coalescing (with NIST at the helm and others aligning), and guidance from bodies like NSA, NIST, ENISA, and ITU provides a clear direction: start now, emphasize crypto agility, and transition to quantum-safe algorithms over the coming decade. For MCC operators, aligning with these standards and timelines is important. It ensures interoperability (you don’t want to choose a non-standard algorithm that others won’t accept) and likely compliance with future regulations. By following best practices in concert with global efforts – essentially “reading off the same sheet of music” – mission-critical communications networks worldwide can collectively achieve resilience against the quantum threat. The challenge is significant, but with meticulous planning and international cooperation, mission-critical systems can be made quantum-ready well before large quantum computers go online, thus safeguarding the crucial communications that human lives and national security depend upon.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.