Upgrading OT Systems to Post‑Quantum Cryptography (PQC): Challenges and Strategies
Introduction
The advent of powerful quantum computers poses a serious threat to today’s cryptography. Public-key algorithms like RSA and ECC – widely used to secure firmware updates, device authentication, and VPN connections – could be cracked by a cryptanalytically relevant quantum computer (CRQC), undermining core security controls.
Operational Technology (OT) environments, such as industrial control systems and critical infrastructure, are especially at risk due to their long-lived devices and infrequent updates. Many OT systems deployed today will still be in use a decade or two from now, well within the timeframe in which experts anticipate quantum attacks will become practical. The most critical OT systems will likely be the last to become quantum-safe due to strict change procedures, long patching or replacement times, verification and validation requirements, and often an inability to handle the compute and bandwidth demands of PQC.
In short, CISOs with OT responsibility must begin quantum-proofing now, even as immediate cyber threats persist.
Quantum Threats and OT Vulnerabilities
The quantum computing threat to cryptography
Quantum computers leveraging Shor’s algorithm will be able to factor large integers and compute discrete logarithms exponentially faster than classical computers, breaking the security of RSA, DSA, ECDSA, Diffie-Hellman, and other common public-key schemes.
This ability to quickly solve the hard mathematical problems that underpin public-key cryptography means quantum machines can recover private keys, forge signatures, and bypass the security controls that underlie today’s digital trust. In effect, a sufficiently advanced quantum computer can masquerade as trusted devices, tamper with data undetected, or decrypt confidential communications. Experts project such CRQC-capable systems could emerge in the 2030s, but nation-states are acting now. Adversaries may already be harvesting encrypted data to decrypt later when quantum decryption becomes feasible, putting long-term sensitive data at risk today.
Why OT environments face special risk
OT platforms historically have been less reliant on cryptography than IT systems – many legacy industrial devices use little to no encryption – but paradoxically this can make them more vulnerable to quantum threats. The cryptography that does exist in OT tends to protect extremely critical functions (trusted firmware, remote access links, safety systems, etc.), and a quantum compromise in those few areas could have disproportionate impact.
Furthermore, OT often connects with IT networks, so if an attacker breaks cryptography on the IT side (e.g. steals credentials or VPN keys), they can pivot into OT. In short, OT security inherits all the quantum-exposure of its IT links, and any instance where OT does use public-key crypto (for encryption, signing, authentication, etc.) becomes a high-value target for quantum-enabled adversaries.
Consequences of cryptographic failure in OT
Unlike IT data breaches (which mainly threaten confidentiality), broken crypto in OT can directly impact physical operations and safety, potentially putting lives, well-being, and the environment at risk. For example, if an attacker uses a quantum computer to forge a digital signature or certificate, they could upload malicious firmware to a controller or masquerade as an authorized engineering workstation. Table 1 outlines key OT use cases for cryptography and how they could be undermined by quantum attacks:
| OT Use Case | Example Scenario | Impact if Broken by Quantum |
|---|---|---|
| Firmware Signing & Secure Boot: ensuring that devices only run authenticated, untampered firmware. | Programmable logic controller (PLC) firmware is signed by the vendor’s RSA-2048 key and verified at boot. | Attacker forges a signature on malicious firmware (since RSA is broken), causing devices to boot compromised code. This defeats safety controls and can establish persistent, stealthy malware in OT systems. |
| Secure Firmware Updates: validating software/firmware patches before installation. | Field devices accept over-the-air updates only if signed by the OEM’s private key (ECC P-256). | A quantum-capable adversary could craft a fake update package with a valid-looking signature, tricking devices into installing rogue firmware. This could spread widely as devices trust the forged update. |
| Device Identity & Authentication: using digital certificates or keys to authenticate devices and users. | An operator’s laptop and a remote SCADA gateway exchange certificates (X.509) to establish a TLS session into an OT network. | If the certificate’s public key (ECC/RSA) is broken, an attacker can impersonate devices or users. They could gain unauthorized remote access to OT control systems by spoofing identities or performing man-in-the-middle attacks. |
| VPNs and Remote Access Tunnels: encrypting OT traffic over untrusted networks (IPsec, TLS, etc.). | A VPN concentrator at a plant uses a Diffie–Hellman or RSA-based handshake to secure connections from engineers working remotely. | Quantum decryption of the handshake would let an eavesdropper decrypt OT traffic in real time. Worse, by forging authentication credentials, an attacker could remotely connect into the OT network, potentially issuing control commands or causing unsafe states. |
| Control System Protocol Security: protecting integrity/confidentiality of OT communications. | Modern ICS protocols like OPC UA or Secure Modbus use TLS and certificate-based authentication between HMIs and PLCs. | Breaking the session’s cryptography allows undetected manipulation of commands and sensor data in transit. An attacker could alter actuator setpoints or falsify readings, leading operators to take wrong actions or damage processes – all while appearing normal. |
Physical and safety implications
The stakes for OT are high – successful quantum-enabled intrusions could disrupt or even physically destroy industrial processes. Examples include disabling safety instrumented systems, causing equipment overload, or manipulating power grid controls. Even a failed attempt can trigger emergency shutdowns or require extensive incident response, leading to costly downtime. In sectors like energy, manufacturing, water, and healthcare, such impacts could cascade into national critical functions.
At the same time, practitioners caution that basic cyber hygiene in OT should not be overlooked. Many legacy OT devices lack any authentication or run on default credentials – problems that need fixing regardless of quantum threats. CISOs should approach PQC as one layer of a comprehensive OT security program. Measures like network segmentation, strict access control, and intrusion monitoring (defense-in-depth) can significantly mitigate quantum-related threats by limiting an attacker’s reach even if crypto fails.
In summary, OT leaders must balance near-term risk reduction with long-term quantum resilience, ensuring that the fundamentals (asset inventory, patching, segmentation, incident response plans) are in place as a foundation for post-quantum upgrades.
Cryptographic Use Cases in OT Systems
Unlike enterprise IT, where encryption and digital certificates are ubiquitous, OT environments historically operated in isolated networks and often relied on physical safety controls. However, modern industrial systems increasingly incorporate cryptographic mechanisms – especially as they interconnect with IT and IoT systems. Below are the common cryptographic use cases in OT and examples of each:
Secure Boot and Firmware Authenticity
Virtually all new industrial controllers, IoT sensors, and PLCs support some form of secure boot or signed firmware. A hardware root-of-trust or bootloader uses a digital signature (RSA/ECDSA today) to verify that firmware/software has been signed by the authorized vendor and hasn’t been tampered with. This prevents unauthorized code from running on the device.
In practice, OEMs maintain a signing PKI (public key infrastructure) to sign firmware updates and initial factory firmware. Example: A smart meter’s boot ROM holds an OEM root public key (2048-bit RSA); on each boot it checks the next-stage firmware’s RSA signature and halts if verification fails.
In OT, firmware signing is critical to block malware like Stuxnet from installing itself as persistent firmware.
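The verify-or-halt flow described above can be sketched in a few lines. Since the Python standard library has no RSA or ECDSA primitives, the sketch below stands in the vendor signature with an HMAC purely to show the control flow; on a real device, the boot ROM would hold only the vendor’s public key and perform an asymmetric signature check.

```python
import hashlib
import hmac
import secrets

# Stand-in for the vendor's signing key. On a real device the verifier holds
# only an RSA/ECDSA *public* key burned into ROM; HMAC (a shared secret) is
# used here solely because the stdlib has no asymmetric primitives.
VENDOR_KEY = secrets.token_bytes(32)

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: 'sign' the firmware image (stand-in for an RSA-2048 signature)."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """Boot ROM side: verify-or-halt, mirroring a secure-boot check."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(expected, signature):
        return "BOOT"   # signature valid: hand control to the firmware
    return "HALT"       # verification failed: refuse to run the image

firmware = b"\x7fELF...turbine controller firmware v2.1"
sig = sign_firmware(firmware)

print(boot(firmware, sig))                    # valid image boots
print(boot(firmware + b"\x00backdoor", sig))  # tampered image halts
```

The essential property for the PQC discussion is the `boot` function: whatever signature algorithm it implements is fixed in ROM, which is exactly why quantum-vulnerable verification logic may be impossible to change in the field.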
Software/Firmware Updates and Patches
Beyond the boot process, cryptography also secures the distribution of firmware and software updates to field devices. Many vendors ship update files (packages, binaries) signed with the company’s private key, which devices or update servers verify before applying. Some update systems use code signing certificates (X.509) chained to a Certificate Authority. Example: A SCADA vendor might issue a patch as a signed ZIP or use Authenticode for Windows-based HMIs. The signature ensures authenticity and integrity of the update.
In OT contexts with rare patch cycles, even one malicious update could be catastrophic, so signing is standard. However, challenges arise in applying patches (devices may be hard to reach or require downtime for reboot). This limited patchability means many OT systems run outdated crypto libraries – making a quantum transition even harder if updates cannot be readily installed.
Device Identity and Mutual Authentication
Some industrial protocols and architectures use device-level identities, often implemented via digital certificates or unique cryptographic keys per device. For instance, OPC UA (a machine-to-machine communication standard) mandates that each server and client has a certificate for establishing secure sessions. These identities are used for mutual TLS authentication, encryption, or signing of critical messages. Example: An electric substation RTU (remote terminal unit) holds a device certificate (perhaps an ECC certificate issued by the utility’s PKI) which it presents to authenticate to the SCADA master system.
Similarly, control system components might use certificates to join an IPsec VPN or to authenticate firmware plugins and logic programs. The identity use case often ties into access control – e.g. only PLCs with the right certificate can connect to a safety controller.
If those credentials are compromised (e.g. via quantum attack on the cert’s keys), an adversary could impersonate trusted devices on the network.
Secure Communications (VPNs, Protocol Encryption)
With the convergence of IT/OT, it’s now common to see encryption used for data in transit in OT. This ranges from VPN tunnels connecting remote sites, to application-layer encryption in industrial protocols. For example, engineers might access control networks via an IPsec or OpenVPN tunnel terminated at a plant firewall. Inside the plant, protocols like IEC 60870-5-104, DNP3 Secure Authentication, IEC 62351 for SCADA, and OPC UA provide options for TLS or object encryption to protect telemetry and commands. Even proprietary fieldbus systems (PROFINET, EtherCAT) increasingly incorporate encryption/authentication add-ons. All these use classical cryptographic algorithms (RSA/ECC for key exchange and authentication; AES for bulk encryption). Example: An offshore oil platform’s control center connects to onshore HQ via an IPsec VPN using 3072-bit RSA for key exchange and AES-256 for encryption.
If quantum decryption breaks the RSA, the confidentiality and integrity of all commands and sensor data in transit could be lost – an eavesdropper might silently record or alter critical signals (though note, symmetric AES-256 would not be directly broken by quantum, but the key exchange to establish it would be).
Access Control and User Authentication
OT personnel increasingly use cryptographic tokens or certificates for accessing control systems. For instance, a maintenance engineer might authenticate to an HMI using a smart card (with an embedded RSA key) or sign commands with a digital signature in high-assurance systems.
Moreover, many industrial sites integrate with corporate Active Directory or IAM systems – which rely on protocols like Kerberos and TLS that themselves use cryptography. Thus, OT is indirectly dependent on those algorithms as well. If an organization’s AD certificate infrastructure or RDP/TLS connections are rendered insecure by quantum attacks, adversaries could gain direct access to operator workstations or engineering consoles, bypassing normal authentication.
It’s important to emphasize that not all OT systems currently implement strong cryptography. Older PLCs may have no encryption on protocols (Modbus, classic PROFIBUS), and some industries still lean on air-gapping rather than crypto.
However, where modernization and regulations have taken hold (power grids, transportation, critical manufacturing), the trend is toward more cryptographic protections in OT – all of which will require quantum-safe replacements. For example, the ISA/IEC 62443 standards for OT security stress using encryption and digital signatures to ensure data confidentiality and system integrity in industrial control systems. Today’s implementations of those requirements are typically not quantum-resistant.
Why PQC Migration Is Harder in OT
Migrating an enterprise IT system to new cryptography is challenging; doing so in an OT environment is orders of magnitude harder. Several inherent characteristics of OT make the transition to post-quantum cryptography particularly fraught:
Long Asset Lifecycles
Industrial equipment and control systems are built to last for decades. It is common to find OT components running for 10-20 years or more (e.g. PLCs in power plants, building HVAC controllers, medical devices like MRI machines).
Some extreme cases like nuclear plant systems or avionics may require operation of decades-old technology due to certified safety requirements. Long-lived systems and critical infrastructures being deployed today might be impossible to upgrade later, especially if PQC requires more compute or memory than the hardware has.
In practice, this means many devices in the field cannot simply be replaced or overhauled when quantum threats materialize – they must be retrofitted or otherwise protected, a daunting prospect.
Limited Patching Windows & Rigid Update Processes
Unlike IT software that can be frequently updated, OT systems often have infrequent, tightly controlled update cycles. Some industrial devices can only be taken down for maintenance once or twice a year (or even less often in critical infrastructure).
Even when updates are possible, they may require extensive re-testing and certification due to complicated process interdependencies or sensitive environments.
This leads to a conservative approach where operators avoid updates unless absolutely necessary. As a result, many OT endpoints run outdated or unpatchable software. Such legacy stacks may lack support for new crypto (or even the ability to load new libraries), making the introduction of PQC algorithms extremely difficult without a full system upgrade.
Protocol and Hardware Constraints
OT protocols and devices often have constrained designs that assume specific cryptographic primitives (if any at all). For instance, a fieldbus communication might have a fixed message size or timing requirements that cannot tolerate the larger key sizes and slower performance of many PQC algorithms. A classic example is the limited packet sizes in some SCADA protocols or low-power microcontrollers that struggle with computationally intensive tasks.
Many current PQC candidates (like lattice-based schemes) have significantly larger key/signature sizes or require more CPU cycles than RSA/ECC. In OT, where a microcontroller might have an 8-bit CPU or 256 KB of RAM, implementing a heavy post-quantum algorithm may be infeasible without hardware changes.
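The size gap is concrete. The figures below are approximate byte counts taken from the published algorithm specifications (treat them as ballpark, since exact sizes vary by parameter set and encoding); a quick check against a hypothetical 1 KB protocol frame shows why constrained OT links struggle:

```python
# Approximate public-key and signature sizes in bytes, from the published
# algorithm specifications. Ballpark figures for illustration only.
SIG_SIZES = {
    "RSA-2048":               {"public_key": 256,  "signature": 256},
    "ECDSA-P256":             {"public_key": 64,   "signature": 64},
    "ML-DSA-44 (Dilithium2)": {"public_key": 1312, "signature": 2420},
    "Falcon-512":             {"public_key": 897,  "signature": 666},
    "SPHINCS+-128s":          {"public_key": 32,   "signature": 7856},
}

def fits(algorithm: str, max_packet_bytes: int) -> bool:
    """Can one signature fit in a single protocol message of the given size?"""
    return SIG_SIZES[algorithm]["signature"] <= max_packet_bytes

# Hypothetical fieldbus with a 1024-byte maximum application payload:
for alg in SIG_SIZES:
    print(f"{alg:26s} fits a 1024-byte frame: {fits(alg, 1024)}")
```

An ECDSA signature fits such a frame with room to spare, while a Dilithium2 or SPHINCS+ signature would have to be fragmented across multiple messages, something many fixed-format ICS protocols simply cannot do.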
Thus, protocol rigidity (lack of extensibility) and device resource limits pose a real barrier. Backward compatibility requirements also mean you can’t just swap out algorithms: a PQC-upgraded component might no longer communicate properly with legacy peers if they don’t understand the new crypto format. Ensuring interoperability during the transition – possibly needing hybrid modes – is especially tricky in OT where protocols are standardized and change slowly.
Weak Visibility into Cryptographic Usage
Many organizations lack a detailed inventory of where and how cryptography is used across their OT environment. Crypto functions may be buried in embedded firmware, proprietary protocols, or vendor black boxes. Because OT has traditionally been managed by engineering teams rather than IT security, documentation of crypto (if it exists) is often siloed or absent.
This “cryptographic blind spot” makes planning a PQC migration risky – you can’t remediate what you don’t know about. For example, an operator might not realize that a certain PLC model uses an RSA-1024 key for firmware verification that cannot be changed without a physical hardware swap. Or there may be hardcoded certificates in field devices that nobody has tracked.
Without comprehensive visibility (sometimes called a Cryptographic Bill of Materials or CBOM), an organization could easily overlook critical vulnerabilities. We’ll discuss CBOMs more in the next section, but it’s worth noting here that establishing this inventory is especially hard in OT due to device heterogeneity and network isolation.
Governance, Safety, and Certification Hurdles
Changes in OT are not just technical; they often require navigating regulatory and safety compliance. Upgrading cryptography in a medical device or an avionics system might necessitate recertification by authorities, which is costly and slow. Additionally, OT changes typically undergo rigorous change management with extensive documentation (especially in industries like pharma or power with strict oversight).
Governance procedures can significantly delay crypto upgrades. For instance, inserting a new root certificate into a substation device might require approved downtime, a method statement for the change, and possibly regulatory notification if it’s part of critical infrastructure.
These procedural burdens mean that even once PQC solutions are available, rolling them out enterprise-wide in OT could be a multi-year project. Organizations that plan ahead (e.g. budgeting for new equipment, scheduling upgrades in maintenance cycles, training staff) will be far better positioned than those reacting after a CRQC appears.
Given these constraints, it’s clear why OT environments may lag in the quantum transition. A worrying scenario is if PQC adoption in OT drags behind the threat timeline – critical infrastructure could become the “weak link” even if IT systems have migrated.
There is precedent here: OT often still runs obsolete Windows versions or unpatched firmware years after fixes are available, due to the factors above. The quantum risk amplifies this gap – an OT system that hasn’t upgraded its crypto by the time quantum attacks emerge could be immediately vulnerable, with no quick fix available. This is why experts recommend starting with no-regret preparatory steps now, such as inventory and crypto-agility measures.
Discovering and Inventorying OT Cryptography
The first step in any cryptographic transition is knowing what you have. For OT-heavy organizations, conducting a cryptographic inventory across all systems is an essential (and non-trivial) task. This inventory should identify every instance of encryption, digital signatures, key exchange, and certificate usage in your environment – including the algorithms/protocols in use and where they reside (device firmware, applications, network links, etc.). Practically, this means examining things like:
Secure Boot Chains
Document the chain of trust in device boot processes. For each device type, determine: Does it verify firmware signatures? What algorithm and key length are used (e.g. RSA-2048, ECDSA-P256)? Is the root-of-trust key stored in hardware (TPM, secure element, fuses) and can it be updated?
Many devices follow NIST’s Platform Firmware Resiliency guidelines (SP 800-193) which recommend an immutable Root of Trust that verifies firmware and can trigger recovery mechanisms. Identify those root keys and signature algorithms – these will need PQC replacements or parallel verification in the future. For instance, if your turbine controller uses an RSA-2048 signature on bootloader code, note that down (algorithm, key length, where stored). Tip: Leverage vendor documentation or open standards; e.g., UEFI-based systems often use RSA signatures for UEFI Secure Boot, and some now support hash-based signatures as per NIST SP 800-208.
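The boot-chain questions above map naturally onto a structured inventory record. The schema below is a minimal illustrative sketch (field names are our own, not from any standard), showing how recording the algorithm and whether the root key is updatable immediately tells you which devices will need hardware changes:

```python
from dataclasses import dataclass

# Algorithm families whose security rests on factoring or discrete logs.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "DSA", "DH", "ECDH"}

@dataclass
class BootChainEntry:
    """One secure-boot link in a cryptographic inventory (illustrative schema)."""
    device_type: str
    verifies_firmware: bool
    algorithm: str            # e.g. "RSA", "ECDSA", "LMS"
    key_length: int           # bits (0 if not applicable)
    root_key_storage: str     # e.g. "fuses", "TPM", "secure element", "flash"
    root_key_updatable: bool

    @property
    def quantum_vulnerable(self) -> bool:
        return self.verifies_firmware and self.algorithm in QUANTUM_VULNERABLE

    @property
    def needs_hardware_swap(self) -> bool:
        # A fixed root of trust means PQC requires new hardware or an
        # external compensating control.
        return self.quantum_vulnerable and not self.root_key_updatable

turbine = BootChainEntry("turbine controller", True, "RSA", 2048, "fuses", False)
print(turbine.quantum_vulnerable, turbine.needs_hardware_swap)
```

Entries whose `needs_hardware_swap` flag is set are the ones to raise with vendors first, since their remediation lead time is the longest.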
Firmware Update Mechanisms
Inventory how each product receives and validates updates. Is there a code signing certificate embedded? What crypto is used for update packages (perhaps an RSA signature or an HMAC with a shared key)? Determine whether the device can accept new trust anchors via updates or if the keys are fixed at manufacturing. This will inform how you might introduce PQC – e.g. a device that can’t change its verification algorithm might instead require an external wrapper (discussed later).
If possible, perform cryptographic bill of materials analysis on firmware images: some tools or services can scan firmware binaries for cryptographic constants (like RSA moduli, certificate files, or TLS libraries). This can reveal hidden uses of crypto. Example: A PLC’s update manual might reveal it uses an X.509 certificate issued by “Vendor Update CA” to verify patches – that tells you an RSA/ECC key is involved, and you should approach the vendor about PQC plans for that CA.
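A first-pass firmware scan of this kind can be as simple as searching an image for well-known byte patterns. The sketch below is a deliberately minimal illustration run against a synthetic blob; real firmware-analysis tools also look for ASN.1 object identifiers, high-entropy key blobs, and linked crypto-library symbols.

```python
import re

# Byte patterns that commonly betray embedded cryptographic material.
# Illustrative list only; production tools search far more signatures.
MARKERS = {
    b"-----BEGIN CERTIFICATE-----": "PEM certificate",
    b"-----BEGIN RSA PRIVATE KEY-----": "PEM RSA private key (!)",
    b"ssh-rsa": "SSH RSA public key",
    b"OpenSSL": "OpenSSL library string (version may follow)",
}

def scan_firmware(blob: bytes) -> list[str]:
    findings = []
    for pattern, label in MARKERS.items():
        for m in re.finditer(re.escape(pattern), blob):
            findings.append(f"offset 0x{m.start():06x}: {label}")
    return findings

# Synthetic image standing in for a real firmware dump:
image = (b"\x00" * 64 + b"OpenSSL 1.0.2k" + b"\xff" * 32 +
         b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----")
for hit in scan_firmware(image):
    print(hit)
```

Even this crude pass surfaces two leads worth chasing: an embedded certificate (whose key algorithm should go into the inventory) and a dated OpenSSL build (whose supported cipher list bounds what the device can ever negotiate).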
Network Protocols and VPNs
Map out all network connections involving OT systems and note any encryption/authentication in use. For each VPN, SSH, TLS, or ICS protocol session, record the algorithms (cipher suites, key exchange methods). For example, are site-to-site OT VPNs using classic IPsec with RSA or perhaps using pre-shared keys (which quantum doesn’t break directly)? Are operators RDPing into HMIs with TLS 1.2 (using RSA/ECDHE) or are they on a jump host with Kerberos? On the control network, check if protocols like DNP3, IEC 104, or PROFINET have their Secure variants enabled (and if so, what crypto). Many legacy protocols might be in cleartext – paradoxically that’s one less thing to worry about quantum-wise (no crypto to break), but obviously not a good security practice.
The inventory should highlight where upgrading to PQC will matter (where crypto is present) and also where adding crypto (to previously plaintext channels) might be prudent in conjunction with segmentation to mitigate quantum threats. Tooling: Consider passive network analysis – some intrusion detection systems can identify protocols and crypto in use on OT networks (e.g. identifying a TLS handshake and extracting the cipher suite). This can complement manual documentation.
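Once cipher suites are recorded, triaging them for quantum exposure is mechanical. The sketch below uses heuristic matching on OpenSSL-style suite names (a production tool would map IANA codepoints instead) to separate public-key exchanges, which carry harvest-now-decrypt-later risk, from pure pre-shared-key exchanges, which Shor’s algorithm does not directly break:

```python
# Key-exchange families whose security rests on factoring or discrete logs.
CLASSICAL_PK_KX = ("ECDHE", "DHE", "ECDH", "DH", "RSA")

def quantum_exposure(suite: str) -> str:
    """Rough triage of an OpenSSL-style cipher suite name (heuristic)."""
    first = suite.split("-")[0]
    if first == "PSK":
        return "PSK: symmetric key exchange, not directly broken by Shor"
    if first in CLASSICAL_PK_KX:
        return "public-key exchange: exposed to harvest-now-decrypt-later"
    return "unrecognized: investigate manually"

observed = [
    "ECDHE-RSA-AES256-GCM-SHA384",  # typical remote-access TLS
    "PSK-AES128-CBC-SHA",           # pre-shared-key VPN profile
    "DHE-RSA-AES128-SHA",           # legacy site-to-site tunnel
]
for s in observed:
    print(f"{s:32s} -> {quantum_exposure(s)}")
```

The triage output feeds directly into the inventory: suites flagged as public-key exchanges are migration targets, while PSK-based tunnels buy time (though their key-distribution burden is its own problem).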
PKI and Certificates
Compile an inventory of all digital certificates and keys used in the OT environment. This includes device certificates (like those for OPC UA servers, embedded web interfaces, or 802.1X network access), any internal CAs that issue certs to OT devices or users, certificates used by third-party vendors for code signing, etc. For each, note the algorithm (RSA 2048, ECDSA P-256, etc.) and expiration.
Many OT orgs have a central PKI for enterprise but may also have ad hoc certificate deployments for certain subsystems (such as a vendor-supplied certificate for a skid or a cloud connection from an IoT device). Knowing all these will allow planning for their quantum-safe replacements.
It’s wise to label which certificates are “high value”, e.g. those that if forged would have major impact (like a code-signing CA versus an individual user VPN cert). A surprising number of OT operators discover certificates they weren’t aware of – for example, a building automation system might have a default self-signed cert that all devices trust for updates.
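Labeling certificates by value and exposure can be automated with a Mosca-style check: if a credential’s lifetime plus your migration time reaches past the year you assume a CRQC arrives, it is effectively at risk already. The sketch below is illustrative; the 2035 horizon and 3-year migration figure are assumptions to be tuned, not predictions:

```python
from datetime import date

# Assumption, not a prediction: the year a CRQC is taken to be plausible.
ASSUMED_CRQC_YEAR = 2035

def triage(name: str, algorithm: str, expires: date, high_value: bool,
           migration_years: int = 3) -> str:
    """Mosca-style certificate triage (illustrative heuristic)."""
    vulnerable = algorithm.split("-")[0] in ("RSA", "ECDSA", "ECC", "DSA")
    if not vulnerable:
        return "ok"
    # Lifetime plus migration lead time overlaps the quantum window,
    # or forgery would have outsized impact: move it to the front of the queue.
    if expires.year + migration_years >= ASSUMED_CRQC_YEAR or high_value:
        return "migrate first"
    return "migrate in normal cycle"

print(triage("firmware code-signing CA", "RSA-4096", date(2040, 1, 1), True))
print(triage("engineer VPN cert", "ECDSA-P256", date(2026, 6, 1), False))
```

A long-lived, high-value code-signing CA lands in the "migrate first" bucket, while a short-lived individual VPN certificate can wait for its normal renewal cycle.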
Cryptographic Libraries and Modules
At a deeper level, inventory the crypto libraries and modules present in systems (OpenSSL, mbedTLS, hardware security modules, etc.). This matters because some libraries might already offer PQC algorithm support or can be upgraded, whereas others (especially in firmware) may never get updated. If you know, for instance, that a line of RTUs uses an old version of OpenSSL that only supports RSA/ECC, you’ll realize those units will likely never directly support PQC without major re-engineering.
In contrast, if a system uses a modular HSM or TPM 2.0, there may be firmware updates from the vendor to add PQC support in the future. Engage vendors for this information – it might require NDA or support contracts, but understanding the crypto under the hood will be invaluable.
One helpful concept here is the Cryptographic Bill of Materials (CBOM). Similar to an SBOM (Software BOM) that lists software components, a CBOM is “a structured inventory of the cryptographic elements built into a software or device”. A CBOM typically enumerates algorithms (e.g. “Uses RSA-3072, ECDSA-P256, AES-128”), cryptographic libraries (like “OpenSSL 1.1.1”), and key materials/certificates in the product. For example, a vendor might provide a CBOM for a smart grid device showing it contains an ECC P-256 key for device identity, supports TLS 1.2 with specific cipher suites, and uses SHA-256 for secure boot hashing.
CISOs should push suppliers to provide CBOMs, particularly for firmware and update mechanisms. A “Firmware-Update CBOM” would explicitly list all cryptographic checks in the update process – e.g. “Firmware signature: ECDSA P-256, Bootloader Root Key: embedded in ROM, Update channel encryption: none.” This information is crucial for planning how to implement PQC (you might decide to wrap that firmware with a Dilithium signature if the device itself can’t verify one, as described later). Some industry initiatives, like the CycloneDX SBOM standard, are adding support for CBOM to formalize this reporting.
Performing the inventory is not a one-time task – it should become an ongoing process. A CBOM alone is a static snapshot; organizations should maintain a living cryptographic inventory that captures not only what could be used, but what is actually configured and in use in their environment. For instance, your devices might support AES-256 and SHA-256, but are configured with AES-128 and SHA-1 – an inventory should reflect the operational reality.
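Capturing supported-versus-configured drift can be a simple comparison once both lists are in the inventory. The sketch below uses an illustrative per-family strength ladder (our own ordering, shown for just two families) to flag devices configured below their capability:

```python
# Weakest-to-strongest ladders per algorithm family (illustrative subset).
FAMILIES = {
    "cipher": ["AES-128", "AES-256"],
    "hash":   ["SHA-1", "SHA-256"],
}

def config_drift(supported: list[str], configured: list[str]) -> list[str]:
    """Flag families where the device is configured below its capability."""
    issues = []
    for family, ladder in FAMILIES.items():
        sup = [a for a in ladder if a in supported]
        cfg = [a for a in ladder if a in configured]
        if sup and cfg and ladder.index(cfg[-1]) < ladder.index(sup[-1]):
            issues.append(
                f"{family}: {cfg[-1]} configured although {sup[-1]} is supported")
    return issues

device = {"supported":  ["AES-128", "AES-256", "SHA-1", "SHA-256"],
          "configured": ["AES-128", "SHA-1"]}
for issue in config_drift(device["supported"], device["configured"]):
    print(issue)
```

Fixing such drift (e.g. enabling AES-256 and SHA-256 where already supported) is often the cheapest quantum-relevant hardening available, since Grover-type attacks erode rather than break symmetric strength and larger margins help.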
Building this comprehensive view will likely require a combination of automated scanning, vendor questionnaires, and manual effort by engineers. The payoff, however, is significant: you’ll be able to pinpoint which assets are most vulnerable (e.g. a certain VPN using RSA-1024 with no upgrade path) and which can be tackled later. It also sets you up for crypto-agility – the ability to swiftly replace algorithms.
In summary, CISOs should treat cryptographic inventory in OT with the same rigor as asset inventory or vulnerability management. Know your secure boots, your keys, your protocols. This knowledge underpins all subsequent migration efforts – you can’t plan mitigations or negotiate with vendors effectively if you’re in the dark about what needs protecting. Once armed with an inventory, you can move on to devising upgrade strategies for each area of exposure, which we will explore next.
Strategies for Upgrading Legacy OT to PQC
With a cryptographic map of your OT landscape in hand, the next step is plotting a course to upgrade or protect each component against the quantum threat. In many cases, ripping out a legacy system and replacing it with a quantum-resistant one is not feasible (or at least not immediately). Instead, organizations will need to adopt creative mitigation techniques and interim solutions to extend the security of legacy crypto until true PQC replacements can be implemented. A layered approach is key: for critical use cases like firmware integrity, you might add a quantum-safe control alongside the existing one (belt-and-suspenders) to cover gaps.
Crypto-Agility and Hybrid Certificates
Before diving into specific upgrades, it’s important to design for crypto-agility. Crypto-agility means building your systems in a way that algorithms can be swapped or added with minimal disruption. In an OT context, this could mean using abstraction layers (e.g. PKCS#11 interfaces or configurable cipher suites) rather than hard-coding algorithms, and allowing devices to accept multiple algorithm choices. Many governments and standards bodies now explicitly call for crypto-agility. For example, NSA’s Commercial National Security Algorithm (CNSA) Suite 2.0 guidance (for U.S. national security systems) names software and firmware signing as the first use case to transition, beginning in 2025, while European agencies such as Germany’s BSI and France’s ANSSI recommend running post-quantum algorithms in a hybrid mode as an interim step. Hybrid mode generally means using PQC alongside traditional algorithms so that security holds unless both are broken. The simplest form in practice is hybrid certificates and dual signatures:
Hybrid Certificates
A hybrid certificate is an X.509 digital certificate that contains two sets of public keys and signatures – one using a classical algorithm and one using a post-quantum algorithm. The certificate is structured so that legacy systems can ignore the PQC part (since it’s in non-critical extensions) and continue to work with the classical key, while PQC-aware systems can use the new key.
For example, a hybrid TLS certificate might have an RSA-2048 key and a CRYSTALS-Dilithium key in the same certificate. During the transition, a server could present this certificate; old clients verify the RSA signature as usual, whereas upgraded clients also verify the Dilithium signature.
The idea is to facilitate a gradual migration: no “flag day” where everything must switch at once, thus avoiding a situation where “all clients must support the new algorithms before deployment”. Hybrid certs have been standardized in ITU-T X.509 (the so-called “Alternative” or “Catalyst” certificate format) and are under discussion in IETF and ISO standards.
From an OT perspective, hybrid certificates could be very useful for device identities and VPN credentials – you could reissue your internal CA certificates as hybrid (e.g. ECC + SPHINCS+) so that as you upgrade clients and devices, they can start validating the PQC part without breaking older devices. Keep in mind that hybrid certs increase certificate size (which could be an issue in constrained protocols) and require careful coordination with your PKI software (CA and issuance tools). Nonetheless, they are one of the most straightforward transitional techniques available.
Composite/Dual Signatures
Beyond certificates, any digital signature used in OT (firmware signatures, documents, etc.) can be made hybrid by generating two signatures with two algorithms and requiring both to be valid. For instance, when signing a firmware image, you could produce an ECDSA signature and a Dilithium signature; the device (or an external verifier) would check both. Initially, you might still rely on the classical signature (since the device might not understand Dilithium), but you have the PQC signature ready for later or for audit purposes.
Some systems might choose a threshold approach – e.g. accept new firmware if at least one of the signatures verifies. However, for true security during the transition, the ideal is AND-composition (require all signatures to be valid) so that an attacker must break both the classical and PQ algorithms. This dramatically raises the bar: even if RSA succumbs to quantum attack, the quantum-resistant signature still protects you (and vice versa). The downside is that hybrid schemes increase size, latency, and complexity – and imply a double transition (first to hybrid, then later dropping the classical part). In OT, that latency is usually not a big deal for offline operations like firmware verification, but it could impact real-time protocols if used there.
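The AND-composition logic itself is simple and worth seeing in isolation. In the sketch below, the two "algorithms" are HMAC stand-ins (the stdlib provides neither ECDSA nor Dilithium); the point is the composition rule, under which a message is accepted only if every signature verifies:

```python
import hashlib
import hmac
import secrets

class StandInSigner:
    """HMAC-based placeholder for a real signature algorithm (illustration only)."""
    def __init__(self, name: str):
        self.name = name
        self._key = secrets.token_bytes(32)
    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self._key, msg, hashlib.sha256).digest()
    def verify(self, msg: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(msg), sig)

classical = StandInSigner("ECDSA-P256 (stand-in)")
pqc       = StandInSigner("ML-DSA (stand-in)")

def dual_sign(msg: bytes) -> dict:
    return {s.name: s.sign(msg) for s in (classical, pqc)}

def dual_verify(msg: bytes, sigs: dict) -> bool:
    # AND-composition: an attacker must forge BOTH signatures to pass.
    return all(s.verify(msg, sigs[s.name]) for s in (classical, pqc))

fw = b"firmware v3.0"
sigs = dual_sign(fw)
print(dual_verify(fw, sigs))                 # True: both signatures valid
bad = dict(sigs)
bad[pqc.name] = b"\x00" * 32                 # simulate one forged/broken signature
print(dual_verify(fw, bad))                  # False: one failure rejects the image
```

Switching `all` to `any` gives the weaker threshold variant; as noted above, only the `all` form preserves security when one algorithm family falls.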
A real-world example of composite signing: in 2023, some software vendors began dual-signing updates with both RSA and Dilithium (or Falcon), anticipating future validation of the PQC signature when standards settle. For OT firmware, CNSA 2.0 explicitly encourages starting to dual-sign firmware immediately, with the goal of making PQC signatures the default by 2025. NIST SP 800-208, which approves stateful hash-based signatures (LMS, XMSS), is similarly focused on code signing use cases (secure boot, updates) as early adopters of PQC.
It’s worth noting that not all authorities agree on hybrids for the long term. Europe’s ENISA and many experts see hybrid schemes as a pragmatic path in this decade, whereas some U.S. guidance (e.g. NSA for some national security systems) suggests moving to pure PQC as soon as practicable. For OT, our recommendation is to embrace hybrid approaches in the near term because they offer compatibility and risk mitigation. OT systems are simply not agile enough to do a one-shot cutover. By using hybrid certs and dual signatures, you can start inserting PQC into the environment now (testing PQ algorithms, building trust in them, and ensuring your infrastructure can handle them) without disabling critical legacy support. Over time – perhaps by 2030 or so – you may then migrate to PQC-only mode once you’re confident all components can handle it and classical crypto is truly obsolete.
Quantum-Safe Firmware Signing (Wrapping and Out-of-Band Validation)
Firmware and boot code integrity is one of the highest-priority areas to safeguard with PQC, because a forged firmware could give an attacker persistent control of a device. Indeed, U.S. federal initiatives identify software/firmware signing for secure boot as the first mandatory quantum-safe use case (ahead of even TLS). But how do we retrofit a legacy device that only knows, say, RSA-2048 signatures, to be secure against quantum attacks? There are a few approaches:
“Wrapping” Firmware with PQ Signatures
If the device itself cannot be easily upgraded to verify a new signature type, one strategy is to add a layer around the firmware that is quantum-safe. For example, you might distribute a firmware update package that includes a quantum-resistant signature (e.g. Dilithium or XMSS) in addition to the normal signature. The legacy device will ignore the PQ signature (since it doesn’t know how to interpret it) but you arrange for another component to check it. This could be done by a trusted update service or gateway.
Consider a scenario: a field device’s update agent sends the new firmware to the device only if the PQC signature is valid (the agent runs on a more powerful system or cloud service that has PQC support). The device still verifies its normal signature as usual; the combination means an attacker would need to forge both, or compromise the update service. In effect, you’re enforcing the PQ check out-of-band.
Another variant is a “cryptographic wrapper”: one could prepend a small bootstrap loader to the firmware image that contains code to verify a PQ signature (using the device’s limited capabilities) before handing off to the main firmware. If the device can run that bootstrap (and it fits in memory), this custom wrapper could supplement the built-in secure boot. This is complex and device-specific, though.
A more straightforward example: OT firmware could be signed in the traditional way and then have a PQ signature applied, effectively double-enveloping the code.
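A minimal sketch of that double-enveloping idea, with HMAC tags standing in for the real classical and PQ signatures (the 32-byte field layout is invented for illustration, not any vendor's package format):

```python
import hmac
import hashlib

CLASSIC_KEY = b"legacy-rsa-stand-in"
PQ_KEY = b"pq-stand-in"

def _sig(key: bytes, data: bytes) -> bytes:
    # 32-byte HMAC tag standing in for a real signature.
    return hmac.new(key, data, hashlib.sha256).digest()

def make_envelope(image: bytes) -> bytes:
    """Inner package is what the legacy device already understands;
    the outer PQ layer wraps it."""
    inner = _sig(CLASSIC_KEY, image) + image
    return _sig(PQ_KEY, inner) + inner

def open_envelope(blob: bytes) -> bytes:
    """Gateway strips and checks the PQ layer, then the inner package
    is checked the classical way (here done in one place for brevity)."""
    pq_sig, inner = blob[:32], blob[32:]
    assert hmac.compare_digest(pq_sig, _sig(PQ_KEY, inner)), "PQ check failed"
    classic_sig, image = inner[:32], inner[32:]
    assert hmac.compare_digest(classic_sig, _sig(CLASSIC_KEY, image)), "classical check failed"
    return image
```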
Stateful Hash-Based Signatures (XMSS, LMS)
These are a special class of PQC signatures whose security relies only on well-studied hash functions, making them a conservative, widely trusted choice. NIST SP 800-208 recommends the LMS and XMSS algorithms specifically for code signing and secure boot applications. These signatures are relatively large (several kilobytes) and each key can only produce a limited number of signatures, but they have minimal runtime overhead and are approved even by NSA for high-assurance signing.
If you control the firmware signing process (e.g. you are the product vendor or you manage an in-house device firmware), you could start using LMS/XMSS to sign firmware in parallel with existing schemes. For instance, sign all new firmware with both ECDSA and LMS. You then publish the LMS public key through some secure channel (maybe as a certificate in a firmware metadata file). Even if devices in the field don’t know how to verify LMS yet, you have the PQ signatures available so that in the future – or under forensic analysis – you can prove the firmware’s integrity against a quantum attacker. Some security-conscious organizations are already doing this.
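For illustration, a firmware metadata record carrying both signatures might look like the following; every field name here is invented for the sketch, not a standard format:

```python
import json
import hashlib

def build_metadata(image: bytes, ecdsa_sig_hex: str, lms_sig_hex: str) -> str:
    """Bundle both signatures with a digest of the image, so the PQ
    signature is archived and auditable even before devices verify it."""
    return json.dumps({
        "firmware_sha256": hashlib.sha256(image).hexdigest(),
        "signatures": [
            {"alg": "ecdsa-p256", "sig": ecdsa_sig_hex},    # devices verify today
            {"alg": "lms-sha256-h10", "sig": lms_sig_hex},  # held for later / audit
        ],
    }, indent=2)
```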
The U.S. National Security Agency (NSA) even requires stateful hash signatures for certain high-security firmware by 2025. For OT operators purchasing equipment, it’s reasonable to ask your vendors: are they planning to adopt LMS or XMSS for code signing? If a vendor’s roadmap includes NIST SP 800-208 compliance, that’s a good sign. These algorithms, being hash-based, are very resilient (security relies on SHA-256 or similar), but they do require careful state management by the signer so that no one-time key is ever reused. This is mostly a burden on the vendor’s signing process, not on your deployed devices, except that device verification code would need an update to support LMS/XMSS if you want the devices themselves to enforce it eventually.
Out-of-Band Validation Devices
An alternative approach for situations where device firmware or secure boot can’t be touched is to use an external hardware validator. For instance, one could deploy a monitoring appliance on the OT network that checks the integrity of critical controllers. Such a device might periodically read the firmware (via JTAG, or via firmware read commands) and validate a PQC signature or hash against a known-good value. If a discrepancy is found, it can trigger an alert or even cut power to the equipment. Think of it as a sentinel that compensates for the device’s lack of quantum resistance. This is somewhat analogous to certain white-listing approaches used today (where an external system monitors PLC code changes). While it doesn’t prevent initial compromise, it could detect if an attacker tried to load quantum-forged firmware.
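A hedged sketch of such a sentinel's core check, where `read_firmware` and `alert` are placeholders for site-specific integrations (JTAG readers, protocol read-back commands, SIEM hooks, and so on):

```python
import hashlib

def check_device(read_firmware, known_good_sha256: str, alert) -> bool:
    """Read back the device firmware and compare its digest to a
    known-good value, raising an alert on any mismatch."""
    image = read_firmware()  # e.g. JTAG dump or firmware read command
    ok = hashlib.sha256(image).hexdigest() == known_good_sha256
    if not ok:
        alert("firmware digest mismatch - possible forged image")
    return ok
```

In practice the known-good value would itself be distributed under a PQC signature, so the sentinel's trust anchor is quantum-safe even though the monitored device's is not.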
Another example: in PC BIOS security, researchers have added external FPGA-based checkers that verify firmware using independent keys. We can adapt that concept to OT. If you have especially high-value assets, you might add a secondary device in-line that performs its own signature check on any code before execution. For instance, an endpoint security controller chip could verify the firmware using a PQ signature while the main CPU still checks the old signature – providing a layered defense.
Version Agility & Fallback
As you start introducing PQC-signed firmware, plan for safe fallback. There may be cases where a device receives a PQC-signed update that it doesn’t recognize and rejects (if the mechanism isn’t perfectly out-of-band). You want to avoid bricking devices.
Techniques like dual-firmware banks (so you can always revert to an older firmware) or threshold signing can help. Threshold signing in this context could mean, for example, the device will accept a firmware if it has a valid classical signature or if two different PQ signatures (maybe by two authorities) validate. By setting up threshold requirements, you can roll out PQC gradually – initially requiring just one of two signatures, and later upping the requirement.
This is complex and currently no OT vendor implements such logic to our knowledge, but conceptually it could be done via firmware update to the device’s verifier. A simpler approach is “bake-off” testing: trial PQC-signed updates on a small subset of devices or in a lab replica to ensure they apply correctly, then scale up.
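The threshold acceptance rule described above (“valid classical signature OR at least two PQ signatures”) reduces to a few lines of policy logic; the boolean verifier results are assumed to come from real algorithm checks elsewhere:

```python
def accept(classical_ok: bool, pq_results: dict, pq_threshold: int = 2) -> bool:
    """Rollout policy: accept if the classical signature verifies, or if
    at least pq_threshold independent PQ signatures verify.

    pq_results maps signing authority -> verification result, e.g.
    {"vendor": True, "owner": False}.
    """
    return classical_ok or sum(pq_results.values()) >= pq_threshold
```

Later in the migration the policy can be tightened, for example to require a PQ signature unconditionally, without changing the surrounding update machinery.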
In summary, the guiding principle for firmware is: don’t trust a single signature scheme. If today that single scheme is RSA/ECC, augment it with something quantum-safe. Whether that means dual-signing, wrapping, or external validation, the goal is to prevent an attacker with a quantum computer from pushing unauthorized code to your devices or bypassing secure boot. By 2030, we expect many OT devices will natively support PQC verification (especially new ones being designed with crypto-agility), but until then, these defensive layers will be crucial. Look to standards like NIST SP 800-193 (Platform Firmware Resiliency) and IEC 62443-4-2 for general guidance on maintaining firmware integrity – then ensure those integrity mechanisms are strengthened for the quantum era by incorporating PQC into those processes (per NIST SP 800-208 and NSA CNSA timelines).
PQC for Network Protection (VPNs, Protocols, and Proxies)
Another major domain to upgrade is communications security. This includes VPN tunnels between sites, remote access into plants, and encryption within industrial protocols (like OPC UA, MQTT, etc.). These typically use public-key cryptography for key exchange (e.g. TLS handshakes, IPsec IKE) and for authentication (digital certs). To quantum-proof these channels, consider:
Deploy Quantum-Resistant VPN Options
Several VPN and secure communication vendors have started offering “quantum-resistant” modes for TLS and IPsec, usually based on hybrid key exchange. For example, a TLS 1.3 handshake can be configured to do an X25519 (ECDH) plus a Kyber-768 (lattice PQC) key exchange, deriving a shared secret that is safe unless both algorithms fail. There are prototypes and early standards (like IETF’s draft for hybrid key exchange in TLS).
If you manage your own VPN infrastructure (OpenVPN, StrongSwan, etc.), look into incorporating Open Quantum Safe libraries or using products from companies like Cisco, Microsoft, AWS that have announced PQC support in VPNs. Even if your OT devices aren’t ready, you can implement PQC on the external tunnels – for instance, between two gateways that then feed into the legacy OT network.
This way, the long-haul communication (which might be intercepted and recorded by adversaries) is protected with PQC, limiting the “harvest now, decrypt later” risk. Note: be mindful of performance – PQC key exchanges like Kyber are computationally efficient (comparable to, or faster than, RSA), though their public keys and ciphertexts are larger, so they can usually run even on moderate hardware without issue; testing is still prudent.
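A sketch of how hybrid designs typically combine the two shared secrets: concatenate them and run the result through HKDF, so the session key is safe unless both algorithms fail. This mirrors the general approach in IETF hybrid key-exchange drafts but is a standard-library illustration, not an implementation of any draft; the label string is invented:

```python
import hmac
import hashlib

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Plain HKDF-SHA256 (RFC 5869 extract-then-expand) using the stdlib."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(ecdh_secret: bytes, kem_secret: bytes) -> bytes:
    """Concatenate the classical (e.g. X25519) and PQ (e.g. ML-KEM) shared
    secrets; an attacker must recover BOTH to derive the session key."""
    return hkdf(ecdh_secret + kem_secret, salt=b"\x00" * 32,
                info=b"hybrid-x25519+mlkem-sketch")
```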
Use Crypto-Agile Gateways/Proxies
If your end devices or PLCs cannot support new cryptographic algorithms, an intermediate system can act as a proxy that speaks PQC on one side and classical on the other. For example, suppose you have an OPC UA server in a plant that only supports RSA certificates. You could put a proxy in front of it (maybe in the DMZ) that clients connect to using a post-quantum TLS cipher suite; the proxy then connects to the OPC UA server using the server’s existing TLS. The proxy essentially “translates” or terminates the connection, so the external link is quantum-safe. This is similar to how TLS offloading works with load balancers – here you’re doing PQC offloading. Of course, the proxy becomes a critical component and must be secured itself, but it’s easier to upgrade one proxy than a dozen embedded devices. Over time, you can retire the proxy when devices get native PQC support.
This approach – sometimes called a quantum-secure gateway – can also be applied to protocols that don’t have any crypto (e.g., wrapping an insecure protocol in a PQC-secured tunnel transparently).
Upgrade Certificates in Control Protocols
Some industrial protocols allow custom certificates (for example, OPC UA lets you bring your own certificates for servers/clients). You can begin issuing larger key certificates or hybrid certificates for those systems. OPC UA over TLS could use a hybrid certificate as discussed, or, if using its own security modes, one might generate PQC keys for those modes (future versions are expected to support them). If devices can’t handle it yet, you might still use larger classical keys in the interim (e.g., use 4096-bit RSA or 521-bit ECC to marginally increase security against quantum until PQC is ready).
Note that increasing classical key sizes only delays the quantum break slightly and has diminishing returns – PQC algorithms are the only long-term solution. However, as an interim risk reduction, some regulators suggest moving to “quantum-harder” settings (like AES-256, SHA-384, RSA-3072) which align with NSA’s CNSA 1.0 guidance. Many IEC 62443 deployments already call for AES-256 and RSA-3072 for high levels; ensure you meet those at least, as a baseline.
Segment and Limit Exposure
While not a cryptographic upgrade per se, strong network segmentation and access control can minimize the impact if an encrypted link gets broken. CISA emphasizes network segmentation as “particularly effective in preventing vulnerabilities from post-quantum cryptographic breaches by restricting attacker access”. For instance, even if an attacker could decrypt a historian’s traffic, proper segmentation would prevent them from pivoting to control devices. Use firewalls, demilitarized zones (DMZs), and unidirectional gateways where appropriate.
Also, reduce unnecessary cryptographic exposure: if a device doesn’t truly need internet connectivity or remote access, turn that off – you can’t attack what you can’t reach. This “minimize your quantum attack surface” approach is recommended by ENISA and others as a near-term mitigation. It buys time by making it harder for an adversary to exploit broken crypto, because they’d need network footholds they might not have.
Overall, for communications, prioritize upgrading external and inter-site links first, since those are most likely to be intercepted by sophisticated adversaries. Internal control network encryption can follow, ideally via upgrades from vendors. Keep an eye on standards like IEEE 802.1X / MACsec, IPsec/IKEv2, and TLS 1.3+ as they evolve to include PQC – many are in draft stages now. The U.S. National Institute of Standards and Technology (NIST) is working on guidance for using post-quantum algorithms in protocols like TLS and IPsec (e.g., through the NCCoE’s draft practice guide NIST SP 1800-38, Migration to Post-Quantum Cryptography); those will provide configurations to follow once finalized. In the meantime, using hybrid modes as described ensures that when quantum-safe algorithms are standardized, you can switch to them smoothly without massive reconfiguration.
“Thin-Slicing” PKI and Trust Anchors
PKI thin-slicing refers to an incremental strategy for updating the public key infrastructure that underpins trust in OT systems. Instead of replacing an entire CA hierarchy at once (which could invalidate every certificate in use), you “slice” the migration into smaller segments. Some techniques include:
Introduce PQC at a New Intermediate CA Level
Suppose you have a traditional two-tier PKI (Root CA → Device Certificates). You could add a new intermediate CA that is signed by the existing Root, but uses a PQC keypair to sign device certs. Devices that trust the Root will transitively trust certs issued by this new PQC intermediate (assuming they can handle the algorithm or you issue hybrid certs). This way, new devices or those that can be updated will get certs from the PQC intermediate (perhaps composite certs: Root signed with RSA, intermediate signed with Dilithium). Legacy devices can continue using the older intermediate for now. Over time, you “slice off” the legacy branch and eventually have the Root sign only PQC intermediates.
This method allows using your existing trust anchors while injecting PQC in the middle. It requires that endpoints can be programmed to handle the new intermediate’s algorithm, which might involve software updates if they validate signatures. But it avoids having to distribute a brand new root trust to every device immediately.
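The sliced chain can be illustrated with stand-in signatures (HMAC tags playing the role of RSA for the root and Dilithium for the new intermediate); the point is the chain-walking logic, not real X.509 handling, and every name below is invented:

```python
import hmac
import hashlib
import json

ROOT_KEY = b"root-rsa-stand-in"            # existing trust anchor
PQC_INT_KEY = b"intermediate-pqc-stand-in" # new PQC intermediate's key

def _sign(key: bytes, cert: dict) -> str:
    body = json.dumps(cert, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def issue(cert: dict, issuer_key: bytes) -> dict:
    return {**cert, "sig": _sign(issuer_key, cert)}

def verify(cert: dict, issuer_key: bytes) -> bool:
    body = {k: v for k, v in cert.items() if k != "sig"}
    return hmac.compare_digest(cert["sig"], _sign(issuer_key, body))

# The trusted root signs the new PQC intermediate; the intermediate
# then signs device certificates with its PQC key.
intermediate = issue({"cn": "OT PQC Intermediate", "alg": "dilithium"}, ROOT_KEY)
device = issue({"cn": "plc-017", "alg": "dilithium"}, PQC_INT_KEY)

def chain_valid() -> bool:
    """Devices already trusting the root transitively trust the device cert."""
    return verify(intermediate, ROOT_KEY) and verify(device, PQC_INT_KEY)
```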
Cross-Certification or Bridge CA
In some cases, you might establish a parallel PQC PKI and cross-certify it with the old one. For example, you create a new PQC Root CA and issue a cross-certificate from the old Root (and vice versa). This makes each PKI trust the other. Then you start issuing new certs from the PQC Root (which carry the cross-cert so old devices trust them). This is complex but can be used if, say, one vendor moves to a new PQC-based CA and you as an asset owner need your devices (with old trust stores) to accept those. The cross-cert says “the old CA trusts the new CA” so devices don’t reject the new signatures. Eventually, the cross-cert can be dropped when all devices have been updated to trust the new PQC CA directly.
Multiple Signatures on Root Keys (Threshold Schemes)
When distributing trust anchors (like firmware root keys or CA certs in devices), consider using threshold or multi-signature schemes for resilience. For instance, a device could be provisioned to require that a firmware is signed by at least 2 out of 3 trusted keys: those keys could be (1) the vendor’s RSA key, (2) the vendor’s PQC key, (3) perhaps an owner-controlled key. This way, even if RSA is later considered unsafe, as long as the PQC key signs the firmware and is one of the 2 required, the firmware is trusted. Threshold ECDSA/PQC combinations are a research area, but conceptually it’s implementing a logical AND/OR of signatures.
In practice, implementing this on existing devices would need a firmware update to their signature validation logic. However, asset owners can push vendors for such capabilities in new designs. Vendor threshold signing might also refer to internal vendor practices – e.g. requiring that multiple approvers sign firmware (using possibly different algorithms or HSMs) to guard against single-key compromise.
If you operate long-lived systems, insist in contracts that “vendor firmware updates must be signed by a quantum-resistant algorithm (in addition to existing algorithms) by year X”, and that the vendor maintain secure multi-party control of signing keys (so they are less likely to be compromised by classical means as well).
Key Management and Crypto-Agile Tools
Ensure that your certificate management systems (or those of your suppliers) are being updated for PQC. For instance, if you use an enterprise PKI product, check if it supports issuing certificates with PQC algorithms or hybrids. The same goes for any embedded HSMs or TPMs in OT gear – will they get firmware allowing PQC? NIST has published its first PQC algorithm standards: ML-DSA (Dilithium) and SLH-DSA (SPHINCS+) for digital signatures and ML-KEM (Kyber) for key establishment, with a Falcon-based signature standard to follow. As implementations become available, test them out in your environment using available libraries (there are open-source ones from the Open Quantum Safe project). Start with non-critical applications: for example, set up a test certificate authority that issues a PQC-based certificate for a dummy device and see if your monitoring systems can parse it, etc. The more you familiarize the team with PQC integration, the smoother the eventual rollout.
The overarching idea with PKI thin-slicing is gradualism – leverage your existing trust where you can, insert PQC in contained ways, and iterate. You don’t want an “off switch” moment where you swap out all trust overnight, because in OT that could lead to widespread authentication failures if anything was missed (imagine hundreds of field devices suddenly rejecting connections because they don’t recognize the new PQC credentials). By phasing it, you localize issues and can roll back if needed.
One real-world example of this principle is how government cybersecurity directives are handling the PQC transition. The U.S. Quantum Computing Cybersecurity Preparedness Act (2022) and follow-on memos require agencies to inventory and then gradually migrate systems, prioritizing high-impact systems and establishing interim milestones (rather than one big deadline).
Engaging Vendors and Supply Chain
Finally, a critical aspect of any OT crypto upgrade is managing your supply chain and vendors. Most OT systems consist of vendor-supplied hardware and software, so you will depend on those vendors to some extent for PQC solutions (especially firmware updates or new device models). Here are some procurement and vendor management practices to facilitate a smoother transition:
Require Cryptography Bills of Materials (CBOMs)
As mentioned earlier, ask suppliers to provide a CBOM for their products. When procuring new devices or renewing support contracts, include language that the vendor must disclose all cryptographic algorithms and keys in the product (sometimes called a “Crypto Component Inventory”). Also require notification of any use of known-weak algorithms. This information will help you identify which products are most at risk and press vendors on specific points (e.g. “Your RTU uses RSA-1024 for secure boot – what is your plan to move to a stronger or PQ algorithm?”).
Some forward-leaning organizations are even asking for “quantum readiness” attestations, where vendors state how and by when they will address quantum-vulnerable cryptography in the product. If a vendor cannot or will not provide such details, that’s a red flag. Fortunately, awareness is growing – an IBM study noted that formal CBOMs can “simplify creation of a cryptography inventory across diverse infrastructure”, making quantum risk management much easier.
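As a toy illustration (field names invented, loosely inspired by CycloneDX-style component records), a machine-readable CBOM makes questions like “which assets still depend on quantum-vulnerable algorithms?” a one-liner:

```python
import json

# Hypothetical CBOM entries as a vendor might disclose them.
CBOM = [
    {"asset": "RTU-4000", "use": "secure boot", "alg": "RSA-1024",
     "quantum_vulnerable": True},
    {"asset": "RTU-4000", "use": "update transport", "alg": "TLS1.2 ECDHE-RSA",
     "quantum_vulnerable": True},
    {"asset": "Historian", "use": "data at rest", "alg": "AES-256",
     "quantum_vulnerable": False},
]

def at_risk(cbom: list) -> list:
    """Assets with at least one quantum-vulnerable cryptographic use."""
    return sorted({entry["asset"] for entry in cbom if entry["quantum_vulnerable"]})
```

Even this flat structure is enough to drive vendor conversations (“your RTU’s secure boot entry says RSA-1024 – what is the remediation plan?”).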
PQ-Upgradability Clauses
When negotiating new OT system purchases, include contractual clauses about crypto-agility and PQC support. For example, stipulate that “the system must support replacement of cryptographic algorithms and keys over its supported lifetime”. This could mean requiring that the device firmware is updatable (no hardcoded crypto that can’t be changed), or that the vendor will provide a patch to add PQC algorithms once standardized.
You might also set a requirement like: “Vendor will provide a post-quantum secure firmware signing mechanism by 2025, and if not, the customer may perform escrow signing” – some critical infrastructure operators use escrowed signing keys to sign updates themselves if a vendor goes out of business or fails to deliver. Ensure your contracts don’t forbid you from implementing mitigations (like wrapping firmware) if needed.
Vendor Roadmap Transparency
Push your vendors to publish a quantum-safe roadmap. Many large vendors have started this (e.g., stating which product lines will get PQC algorithm support and in what timeframe). If a vendor has no such roadmap, ask for one. Emphasize that your risk management requires knowing when their product will be compliant with emerging standards (like NIST PQC standards, CNSA 2.0 if applicable, etc.).
For instance, if you operate a railway signaling system, you’d want to know if the vendor plans a hardware refresh or firmware update to introduce PQC by say 2030, or if you’ll need to replace components. Vendors themselves may be dealing with component supply chains (chips, libraries) and might not have immediate answers, but the dialogue is important. In some cases, industries might coordinate (through ISACs or regulators) to get sector-wide roadmaps.
Threshold and Multi-Party Signing by Vendors
For very long-lived deployments (like grid control systems or military OT systems), consider requiring vendors to adopt threshold cryptography for their signing keys. This means the vendor’s master signing key (for firmware or software) is split among multiple HSMs or individuals, and a quorum is needed to produce a signature. How is this relevant to post-quantum? It indirectly helps by reducing the risk that the vendor’s classical signing key gets stolen by conventional means (phishing, insider, etc.) – which is arguably a nearer-term risk than quantum attacks. But beyond that, threshold schemes could be extended such that, for example, a signature is only considered valid if signed by two different algorithms from two independent systems (one could imagine a vendor having an RSA signing system and a Dilithium signing system and requiring both outputs).
As a customer, you might not get into that level of detail, but you can ask for evidence of strong code signing practices (many IoT/OT vendors now have to comply with NIST’s Secure Software Development Framework which includes secure signing and key management).
Firmware Update Policies and Escrow
Work with vendors on how firmware updates will be handled when PQC becomes necessary. If a device is nearing end-of-life, will they at least issue a last update that allows you to trust the device beyond quantum? For example, maybe their final update in 2028 could include the device being configured to accept a customer-supplied root certificate (so you could sign updates yourself with PQC later if needed). Or perhaps they agree to open-source certain code if they cannot support it.
These are tricky negotiations, but some industries have “enduring security frameworks” – for instance, the U.S. DoD often requires vendors to provide sustainment tools to validate/maintain equipment for decades. Frame PQC in that light: it’s part of long-term sustainment.
Leverage Standards and Consortia
Point vendors to standards and guidance that you expect them to follow. For instance, “NIST SP 800-208 and NSA CNSA require LMS/XMSS for code signing – does your product roadmap include adopting those for firmware integrity?”. Or “IEC 62443-4-2 requires cryptographic integrity – how will your solution maintain that with quantum-safe algorithms?”. By showing you are aligning with recognized standards, it adds weight to your requests (it’s not just you being difficult; it’s an industry direction). European operators might reference the ENISA guidance or the EU’s coordinated roadmap, asking if the vendor is aware and engaged with those efforts. In some cases, collaborate with peers – if multiple major customers all ask a vendor about PQC, it will spur action faster. Sector organizations (like utilities, oil & gas, etc.) are beginning to discuss quantum preparedness, which can lead to collective pressure on suppliers.
It must be acknowledged that some legacy products will simply never be made quantum-safe by their original vendors – especially if the product line is discontinued or the company no longer exists. In those cases, your options are: encapsulate it in mitigations (network isolation, external validators, etc.), replace it earlier than planned, or accept the risk for a period (perhaps if it’s a low-criticality device). Portfolio management comes into play: identify which devices are “crypto-dead-ends” and prioritize them for replacement projects. On the flip side, identify which vendors are forward-leaning and could be strategic partners in the PQC journey.
A positive development is that global standards bodies are actively working on the PQC migration challenge. NIST’s National Cybersecurity Center of Excellence (NCCoE) is developing a migration practice guide (NIST SP 1800-38, Migration to Post-Quantum Cryptography), and the U.S. DHS issued a Post-Quantum Cryptography Roadmap with phases for organizations to follow. The EU is similarly funding research and coordination. So as a CISO, you’re not alone – tap into these resources and bring them to vendor discussions. The language of roadmaps, milestones, and risk-based transition resonates with both technical teams and executives at vendors.
Next Steps for CISOs
Upgrading OT systems to post-quantum cryptography is undoubtedly challenging, but with careful planning and collaboration, it is achievable. As a CISO overseeing OT security, you should take a structured approach. Here are practical next steps to begin your PQC migration journey:
- Mobilize a PQC Task Force: Assemble a cross-functional team (OT engineers, IT security, risk management, procurement, and vendor reps) to lead the effort. Ensure everyone is educated on the quantum threat and the importance of acting now, not later. Set clear executive sponsorship so that this team has the authority to gather data and implement changes over multiple years.
- Conduct a Crypto Inventory and Risk Assessment: Start with the discovery process described earlier – document all cryptographic usages in your OT environment. For each instance, assess the criticality and quantum vulnerability: e.g., “PLC secure boot using RSA-2048 – high impact if broken,” or “Historian database TLS using RSA-2048 – moderate impact.” Identify any already-compliant areas (perhaps you have some longer keys or vendor-stated quantum-resistant features) and the glaring gaps (like known weak crypto or unknown algorithm cases). This inventory and assessment will be the foundation of your roadmap.
- Prioritize “Crown Jewels” and Low-Hanging Fruit: Not everything can be fixed at once. Prioritize systems that: (a) perform critical functions where a crypto break would be catastrophic (safety systems, major process control, etc.), and (b) are easiest to mitigate early (for instance, upgrading a VPN is easier than upgrading dozens of field devices). A likely first target is remote access paths and interconnections, since those can expose the OT to external quantum-enabled adversaries. Securing those with PQC or segmentation reduces overall risk. Simultaneously, plan to address firmware signing for high-importance devices in collaboration with vendors (or via the wrapping techniques discussed).
- Develop a Roadmap with Milestones: Create a multi-year roadmap specific to your organization, aligned with guidance from NIST, CISA, and ENISA. Include key milestones such as “Inventory complete by 2025”, “PQC-capable testbed established by 2026”, “Begin deployment of hybrid certificates in 2027”, “All new OT assets PQC-ready by 2028”, etc. This should be a living document. The UK NCSC suggests it may take 2-3 years for a large organization to migrate once serious effort begins, so plan accordingly. Build in time for standardization – NIST’s first final standards (FIPS 203, 204, and 205) arrived in August 2024, and more may come later. Your roadmap can remain algorithm-agnostic where standards are still evolving (e.g., say “implement a NIST PQC algorithm for signatures” rather than naming a specific algorithm now). Be sure to tailor for OT considerations: for example, coordinate with maintenance outages, and include steps like validating PQC in lab environments that mimic industrial conditions.
- Implement Interim Risk Mitigations Now: While working on the long-term fixes, immediately strengthen your posture with classical controls. This means: doubling down on network segmentation (limit connectivity so a quantum attack has less reach), strong access controls (MFA for remote access, vaulting of credentials), patching any easy crypto weaknesses (e.g., eliminate any remaining use of deprecated protocols like SSL/TLS 1.0 or SSH v1, increase any sub-2048-bit keys), and monitoring (so if an attack occurs, you catch it quickly). Also, encrypt sensitive data at rest now with strong symmetric encryption – even if someone steals it and waits for a quantum computer, a robust symmetric cipher (AES-256, for instance) is likely to resist quantum attacks (Grover’s algorithm weakens symmetric ciphers only modestly, effectively halving the key length in security terms). These measures reduce the urgency and potential impact, buying you time to implement PQC properly.
- Engage in Industry Collaboration and Stay Informed: Join industry working groups on PQC for critical infrastructure. Many sectors have initiatives (often via ISA, IEEE, etc.) where you can share experiences and solutions. Also keep track of evolving standards: for example, NIST SP 800-208 (hash-based signatures) and upcoming NIST SPs for PQC integration will be important, as will updates to IEC 62443 series to incorporate crypto-agility. ENISA’s reports (2021 and 2022) provide good overviews of PQC integration challenges. By staying informed, you can update your roadmap with the latest best practices and maybe even influence vendors with knowledge of what’s coming (for instance, if you know an IETF standard for a hybrid VPN protocol just got approved, you can push your network vendor to adopt it).
- Test and Pilot PQC Solutions: Don’t wait for a big-bang implementation – start small pilots. Set up a test network segment with some non-production devices and experiment with PQC algorithms. For example, try using OpenSSL’s post-quantum builds to secure a Modbus tunnel, or test a firmware signing process with an XMSS key. Evaluate performance impacts on low-power hardware if you can (some chip vendors, such as NXP and Infineon, provide sample PQC libraries for microcontrollers). These pilots will uncover practical issues (such as certificate size limits and algorithm speed) and help build confidence. They also provide demonstrators to show leadership and auditors that you’re proactively tackling the issue.
- Update Policies and Procurement Criteria: Reflect quantum readiness in your organizational policies. Your crypto policy likely states that any new system must use approved cryptography (a list that will soon include the NIST PQC algorithms). Update it to mandate crypto-agility – e.g., “systems must support at least one of the NIST PQC algorithms and be configurable to add future algorithms.” For procurement, as discussed, include requirements and questionnaires about PQC. Also consider lifecycle: if you’re buying a piece of equipment expected to run until 2040, it absolutely needs a path to PQC; otherwise you are buying a ticking time bomb. Make sure your procurement teams know to weigh this in supplier selection.
- Communicate and Educate: Present the plan to executives and get their buy-in by framing it as maintaining the resilience and trustworthiness of critical operations in the quantum era. The cost and effort are justified by the risk – cite the DHS/CISA and EU warnings that inaction could lead to catastrophic breaches. Also educate OT personnel and engineers about what the changes mean for them (for instance, they might need to learn new procedures for certificate updates or expect slightly different performance after the PQC rollout). Building a quantum-ready culture now will smooth the implementation later.
- Plan for Continual Reassessment: Quantum computing is a rapidly evolving field. Set a cadence (say, annual) to revisit your quantum risk assessment. The timeline may move up if a breakthrough happens, or some algorithms may get cracked and be replaced by NIST (the field is young – SIKE, a one-time NIST candidate, was broken by a classical attack in 2022, and other schemes might not stand the test of time). Also track any instances of “store-now-decrypt-later” activity relevant to your sector (intelligence communities may signal if adversaries are actively harvesting certain data types). Your plan should be dynamic enough to respond to such intelligence.
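The Grover effect mentioned in the interim-mitigations step above can be made concrete with a few lines of arithmetic: Grover’s algorithm searches an n-bit keyspace in roughly 2^(n/2) operations, so a symmetric cipher’s effective post-quantum strength is about half its classical key length. The sketch below is illustrative only; the key lengths are the standard AES parameters and the halving rule is the commonly cited approximation.

```python
# Grover's algorithm finds a key in an n-bit keyspace in ~2^(n/2) quantum
# operations, so effective post-quantum strength is roughly n/2 bits.
SYMMETRIC_KEYS = {"AES-128": 128, "AES-192": 192, "AES-256": 256}

def grover_effective_bits(classical_bits: int) -> int:
    """Approximate security level (in bits) against a Grover-equipped adversary."""
    return classical_bits // 2

for name, bits in SYMMETRIC_KEYS.items():
    print(f"{name}: ~{grover_effective_bits(bits)}-bit post-quantum security")
# AES-256 retains ~128-bit security, which is why it is considered quantum-resistant,
# while AES-128 drops to ~64 bits - a key reason to standardize on 256-bit keys.
```

This is why guidance generally recommends AES-256 rather than AES-128 for data that must stay confidential into the quantum era.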
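For the firmware-signing pilot suggested above, the real targets are the OpenSSL post-quantum builds and vendor SDKs; purely to illustrate the hash-based signature family that XMSS belongs to, here is a minimal Lamport one-time signature sketch using only the Python standard library. This is a teaching aid, not production code: XMSS (RFC 8391) layers Merkle trees and careful state management on top of this basic idea so that many messages can be signed.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # One pair of random 32-byte secrets per bit of the 256-bit message hash;
    # the public key is the hash of every secret.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, pk

def bits(digest: bytes):
    # Yield the 256 bits of a SHA-256 digest, most significant bit first.
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per bit of H(msg). Each key pair must be used ONCE only.
    return [sk[i][b] for i, b in enumerate(bits(H(msg)))]

def verify(pk, msg: bytes, sig) -> bool:
    # A valid signature hashes back to the published half of each key pair.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(H(msg))))

sk, pk = keygen()
sig = sign(sk, b"firmware-image-v1.2")
assert verify(pk, b"firmware-image-v1.2", sig)
assert not verify(pk, b"tampered-image", sig)
```

Because security rests only on the hash function, schemes in this family are considered quantum-resistant, which is why NIST standardized stateful variants (XMSS, LMS) in SP 800-208 for exactly the firmware-signing use case discussed above.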
By following these steps, a CISO can lead their organization along a manageable path to quantum resilience. It’s worth remembering that the goal is not instantaneous perfection but continuous risk reduction.