How to Perform a Comprehensive Quantum Readiness Cryptographic Inventory

Introduction

Regulators and security standard-setting organizations across industries now increasingly recommend, or even mandate, cryptographic inventories as a first step in post-quantum migration programs. A cryptographic inventory is essentially a complete map of all cryptography used in an organization’s systems, and it is vital for understanding quantum-vulnerable assets and planning remediation.

In theory it sounds straightforward: “list all your cryptography.” In practice, however, building a full cryptographic inventory is an extremely complex, lengthy endeavor. Many enterprises find that even identifying all their IT and OT assets is challenging, let alone uncovering every cryptographic component hidden within those assets. Cryptography often lurks in multiple layers of hardware, software, and firmware, making it difficult to spot.

Despite the difficulty, performing a thorough inventory is expected by major security guidelines as the foundational step toward crypto-agility and quantum readiness. Without it, organizations cannot effectively assess where legacy algorithms (like RSA or ECC) are used and vulnerable, nor plan their transition to quantum-safe solutions. But executing a comprehensive cryptographic inventory is not a trivial “box-ticking” exercise. It demands careful strategy, a combination of automated tools and manual effort, and close coordination across the organization over a long period, often years for large enterprises. Performing this inventory is hard, but it sets the stage for all subsequent steps.

Why a Comprehensive Cryptographic Inventory Is Essential

Virtually every post-quantum security roadmap published by national and sectoral regulators and security organizations puts “inventory your cryptography” at the top of the to-do list, and it should be your first initiative as you prepare to address the quantum threat. For good reason – an organization cannot protect or upgrade what it doesn’t know it has. A cryptographic inventory provides visibility into all the algorithms, libraries, keys, and protocols in use across your products, applications, and services. It serves multiple important purposes:

Risk Assessment & Quantum Readiness

By cataloging where quantum-vulnerable cryptography (e.g. RSA, ECC) is used, the inventory provides a clear view of your exposure to cryptanalytically relevant quantum computers (CRQCs). In practice, this goes beyond a simple list – it ties cryptographic usage to business context and data sensitivity. Leading security guidance recommends highlighting where vulnerable algorithms protect high-value or long-lived data. For example, if certain systems handle secrets that must remain confidential for decades (think national secrets or patient records), those systems can be flagged as top priority for post-quantum cryptography (PQC) upgrades. The inventory thus informs a risk-based migration roadmap – focusing remediation efforts first on the cryptosystems whose compromise would have the greatest impact or whose data has the longest “shelf life” against future decryption.

Crucially, an inventory and the risk assessment built on top of it also help counter the “harvest now, decrypt later” threat. Adversaries may be intercepting and storing encrypted data today, anticipating that a quantum computer will decrypt it in the future. By knowing exactly which assets use legacy encryption and how long the protected data needs to stay secure, you can proactively deploy protections (or interim mitigations) for those assets now. In short, the inventory exposes your “quantum-vulnerable” footprint and enables quantum readiness: you can’t fully assess or address quantum risk if you don’t know where all your susceptible cryptography lies.

Regulatory Compliance

As noted, regulators across industries are increasingly requiring formal cryptographic inventories as part of cybersecurity and quantum readiness programs. For instance, U.S. government directives explicitly mandate federal agencies to maintain a prioritized inventory of their cryptographic systems as they transition to PQC (see OMB Memorandum M-23-02). In fact, the 2022 Quantum Computing Cybersecurity Preparedness Act requires every U.S. executive agency to develop an inventory of quantum-vulnerable cryptography as part of its migration planning. Similarly, a joint CISA/NSA/NIST factsheet in 2023 urges critical infrastructure organizations to “create a cryptographic inventory” as an immediate step toward quantum security. And I’m aware of multiple other national regulators that are planning to issue similar requirements within the next few months. This trend signals that having a documented crypto inventory is quickly becoming a compliance expectation, not just a best practice.

Maintaining an up-to-date cryptographic inventory demonstrates due diligence and governance. It shows auditors and regulators that you have control over your encryption and keys – which can be crucial for satisfying requirements in frameworks like ISO 27001, PCI-DSS, or sector-specific mandates. Organizations that cannot produce an inventory may soon find themselves out of step with regulatory guidance. Notably, security agencies highlight that a crypto inventory can even support broader initiatives like Zero Trust (by identifying all encryption-dependent assets and data flows) and supply chain security. Internationally, other authorities echo this stance: for example, Europe’s cybersecurity agency ENISA recommends that organizations inventory their cryptographic assets, perform crypto risk assessments, and ensure algorithm agility as part of post-quantum preparedness. Bottom line – a comprehensive inventory helps satisfy current mandates and positions you ahead of emerging compliance requirements in the crypto governance space.

Vulnerability Management

Even outside the quantum context, a cryptographic inventory is a powerful tool for day-to-day vulnerability management. When a new weakness in an algorithm or library is discovered, you can instantly pinpoint all systems that rely on that broken component. For example, if a critical flaw like Heartbleed is found in OpenSSL, an organization with a robust inventory can immediately query which applications and devices embed the vulnerable OpenSSL version – and then expedite patches on those systems. Likewise, if standards bodies deprecate an aging algorithm (say SHA-1 or RSA-1024) due to cryptanalytic advances, your inventory tells you exactly where those algorithms are in use so you can plan their timely replacement. This ability to quickly map a threat to affected assets is key for rapid response.
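
As a sketch of what that lookup can look like in practice, here is a minimal query against a hypothetical inventory database (the `crypto_inventory` table and its field names are illustrative assumptions, not a standard):

```python
import sqlite3

# Hypothetical inventory schema: one row per cryptographic component per asset.
# Field names are illustrative; adapt to whatever schema your CBOM uses.
conn = sqlite3.connect("crypto_inventory.db")

rows = conn.execute(
    """
    SELECT asset_name, owner, component, version
    FROM crypto_inventory
    WHERE component = 'OpenSSL'
      AND version LIKE '1.0.1%'  -- Heartbleed affected OpenSSL 1.0.1 through 1.0.1f
    """
).fetchall()

for asset, owner, component, version in rows:
    print(f"PATCH NEEDED: {asset} (owner: {owner}) embeds {component} {version}")
```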

Without an inventory, organizations often remain unaware of lingering weak crypto in obscure corners of their IT environment. Missing a hidden instance of an obsolete cipher or outdated certificate can leave a gaping hole in security. In contrast, an up-to-date inventory shines a light on these blind spots so they can be remediated before attackers exploit them. In essence, your cryptographic inventory becomes a living catalog that underpins crypto-related patch management: when a vulnerability or compliance issue arises, you’re not scrambling in the dark – you have a map of where to focus updates, configuration changes, or key rotations. This significantly reduces incident response time and ensures no cryptographic vulnerability is overlooked.

Crypto-Agility & Modernization

A comprehensive inventory is the foundation for crypto-agility – the capacity to seamlessly swap out or upgrade cryptographic components in response to new threats and standards. You cannot hope to be agile with your cryptography if you don’t even know what algorithms and libraries you’re using. In fact, experts consider maintaining a detailed cryptographic inventory a prerequisite for any crypto-agile program. With a clear view of all crypto implementations, you can plan upgrades in a systematic, low-risk way. For instance, if you need to migrate legacy systems to TLS 1.3, or replace all SHA-1 certificates with SHA-256, the inventory pinpoints exactly which applications and devices are affected. This saves time and prevents oversight – the inventory essentially serves as your to-do list for modernization efforts.

In the context of post-quantum migration, crypto-agility enabled by an inventory is especially crucial. The inventory identifies everywhere that vulnerable algorithms (RSA, ECC, etc.) are used, allowing you to prioritize those for retrofitting with quantum-resistant algorithms. It also facilitates smoother coordination with vendors and third parties: you can provide suppliers with a list of crypto dependencies in their products that need updating, or evaluate vendors’ PQC roadmaps against your findings. Many organizations compile the inventory into a formal Cryptographic Bill of Materials (CBOM), which enumerates each system’s cryptographic components and their attributes. This CBOM becomes an invaluable reference for planning and tracking upgrades – ensuring nothing is left behind during a crypto overhaul. In short, a cryptographic inventory makes your security architecture future-proof and agile. It equips you with the knowledge to quickly retire unsafe ciphers, adopt new standards (like NIST’s post-quantum algorithms), and adapt to changes without guesswork or network disruptions. Crypto-agility isn’t achievable without knowing your cryptographic assets; the inventory is what makes “agile crypto” a practical reality for the organization.

In short, a comprehensive cryptographic inventory is foundational for sound risk management and future-proof security. It’s the only way to fully understand your exposure to both current and emerging threats.

However, acknowledging the importance of an inventory is the easy part – actually performing one is far more challenging.

The Challenges of Cryptographic Inventory

If cryptographic inventory were simple, every organization would have done it by now. In reality, several factors make this process exceedingly difficult and time-consuming. It’s crucial to understand these challenges up front.

Cryptography’s Hidden Depths

Cryptography isn’t a surface-level feature you can easily see in a system; it runs as a hidden thread through numerous layers of technology. An application might call encryption functions provided by a third-party library, which in turn rely on cryptographic services in the operating system or hardware (TPM, HSM, CPU instruction sets, etc.). Firmware and embedded systems also perform cryptographic operations under the hood (for example, device bootloaders verifying signatures). This layered stack means many cryptographic implementations are invisible to casual observation. Even the developers or system owners may not be aware of all the crypto happening in their system, because much of it is abstracted away in libraries and components. This “invisibility” is by design – crypto is meant to work seamlessly in the background – but it complicates efforts to inventory it. You must dig into each layer (application, libraries, OS, network, hardware) to uncover all instances. For an illustration of cryptographic complexity, see my post analyzing the levels of cryptography involved in a 5G call: “Cryptography in a Modern 5G Call: A Step-by-Step Breakdown.”

Incomplete Asset Visibility

A cryptographic inventory is only as good as your general asset inventory. Many organizations struggle to maintain a full catalog of all their applications, services, devices, and third-party solutions. Modern IT and OT environments are sprawling and dynamic, including on-premise systems, cloud services, containerized microservices, mobile and IoT devices, PLCs, and more. If you don’t know about a system, you certainly can’t account for its cryptography. Additionally, “shadow IT and IoT” – systems or applications procured outside of official channels – can introduce cryptographic functions unknown to the central technology organization. The same goes for cloud services and VMs spun up by developers: these might implement cryptographic controls (e.g. cloud storage encryption) that aren’t obvious to the organization. IoT and OT devices pose another blind spot; these often have built-in cryptography (for communications or firmware integrity) but may not be centrally monitored. All these factors make scoping the inventory difficult – you must cast a very wide net to avoid missing assets.

Legacy Systems and Poor Documentation

In many enterprises, there are legacy systems still running old protocols or algorithms (e.g. outdated SSL/TLS versions, hard-coded DES encryption in an old database, etc.). Often the people who set up those cryptographic mechanisms are long gone, and documentation is sparse. You might encounter a decades-old custom application using a now-obsolete crypto library, without any docs on its cryptographic details. Similarly, third-party “black box” products from vendors may include cryptography that the vendor doesn’t fully disclose (citing intellectual property or security-by-obscurity). For example, an appliance or piece of network hardware might perform encrypted communications or key management internally, but the vendor provides minimal info on algorithms or key lengths used. These vendor black boxes make it hard for the customer to inventory crypto – you may need to press the vendor for details or attempt to reverse-engineer the cryptography indirectly. Even open-source software, while transparent in theory, can be so large and frequently updated that tracking all cryptographic components is non-trivial.

Continuous Change

The IT environment is not static – new applications, updates, and configuration changes are constantly occurring. Cryptography usage can therefore change over time as well. An inventory can become outdated quickly if it’s treated as a one-time project. Systems might get patches that enable new ciphersuites, new TLS certificates are deployed, containers spun up or down, etc. This dynamic nature means the inventory process must account for changes (usually by incorporating continuous monitoring, as we’ll discuss). It’s a challenge to capture a moving target.

Organizational Silos and Knowledge Gaps

Conducting an enterprise-wide cryptographic inventory requires cross-team coordination. Different teams manage different pieces (network ops, application developers, security, OT engineers, etc.), and no single person or team has full knowledge of all cryptographic usage. Silos can hinder the collection of information. Additionally, some staff may not recognize certain functions as “cryptography” (for instance, an admin might not realize that an API token generation involves a cryptographic algorithm). There’s also the human factor of memory and accuracy – people may simply forget to mention a particular setting or assume “that’s someone else’s area.” These gaps mean relying purely on people’s self-reported knowledge will miss things, which is why a heavy manual approach is problematic (more on that below).

Scale and Complexity

Finally, just the sheer scale of modern IT is a challenge. A large enterprise might have thousands of applications and devices. Each could be using multiple cryptographic components. For example, a single web application might: use TLS for external connections, use IPsec or VPN for some internal links, rely on an authentication library that hashes passwords, call a database that encrypts certain fields, and run on an OS that has its own cryptographic policies. Multiplied across all systems, the number of individual cryptographic “items” (certificates, keys, algorithm instances, library calls, etc.) to catalog can be enormous. Ensuring 100% coverage is arduous – but missing even one weak crypto instance could leave a backdoor for attackers. This is why experts emphasize that achieving a truly complete inventory may take years of effort in a big organization.

In summary, cryptography tends to be ubiquitous yet hidden, and organizations often lack the visibility or documentation needed to easily enumerate it. The complexity spans technical, informational, and organizational dimensions. Recognizing these challenges should set realistic expectations: a comprehensive cryptographic inventory is not as simple as scanning a network for open ports. It requires a methodical, multi-layered approach.

The Pitfalls of Manual-Only Approaches

Facing the challenges above, some companies take what appears to be an “easy” route: they conduct the inventory via interviews, surveys, and spreadsheets. For example, they’ll send questionnaires to application owners or hold meetings asking teams “what cryptography do you use?”, then compile the answers in a spreadsheet. This manual, interview-based inventory approach might seem straightforward and quick to get started – but it is deeply flawed and dangerous if relied upon as the sole method. I wrote more about it previously in “Dos & Don’ts of Crypto Inventories for Quantum Readiness.”

Why is a purely manual inventory inadequate? Put simply, people don’t know what they don’t know. Asset owners, developers, and IT personnel can certainly report some obvious cryptographic uses, but they will overlook many instances. As discussed, cryptography is often buried in system layers that even the responsible teams aren’t fully aware of. No amount of good intentions in interviews can overcome the fact that humans have limited visibility and memory of every technical detail. Even a diligent system owner might report “we use TLS 1.2 for these services” and forget that the system also stores some credentials encrypted with a legacy algorithm, or that there’s a scheduled batch job using PGP encryption, etc. Interviews typically yield only a partial view, missing hidden dependencies and less obvious implementations.

Furthermore, collecting data by manual questionnaires leads to inconsistencies and errors. Different respondents might interpret questions differently or use inconsistent terminology (one might say “AES-256” while another says “256-bit SSL” referring to the same thing), resulting in confusion or duplicate entries. Manual spreadsheet data entry is prone to typos and omissions. People might accidentally skip over a system, or mis-classify an algorithm due to misunderstanding. It only takes a few such errors to make the inventory unreliable.

Another major problem is that a spreadsheet is static and quickly becomes outdated. The moment new systems are added or configurations change, a manually compiled spreadsheet will not reflect it. Unless continuously maintained (which rarely happens with manual efforts), the inventory document will drift out of sync with reality. This can create a false sense of security, where management believes “we have everything inventoried” but in fact the spreadsheet is incomplete or obsolete. In some ways, an incorrect or illusory inventory is worse than none at all, because it can breed complacency. Leadership might allocate resources assuming the inventory is thorough, when in truth critical gaps remain.

In practice, relying only on interviews and spreadsheets often turns into a “check-the-box” exercise – it gives the appearance of progress (“look, we filled out our crypto inventory file!”) without real efficacy. As my previous post on this topic noted, this approach can mislead organizations into thinking they are prepared, when in fact they have misguided confidence in an unreliable inventory. Indeed, an inadequate inventory may cause misallocation of effort (addressing things that were easy to find, while overlooking harder-to-find but more critical vulnerabilities).

To be clear, manual information gathering still has a role – human knowledge can supplement automated discovery – but it cannot be the only approach. Interview responses are a starting hint at best, not a source of truth. Comprehensive inventories must go deeper.

Real-world example: An organization that relied on self-reported crypto info from teams found out later that a forgotten legacy process was using an old SHA-1 based signature scheme. None of the interviewees mentioned it because no one person knew it end-to-end. This “hidden” crypto component would have been missed entirely had they not eventually scanned system configurations and discovered traces of the SHA-1 usage. The lesson is clear: if you only ask people, you’ll only get what they remember – which is never the whole story.

In summary, a cryptographic inventory cannot be completed by questionnaires and spreadsheets alone. Those methods are woefully insufficient and even risky. They should be augmented (and largely replaced) by systematic, tool-driven discovery methods that probe the environment more deeply and objectively.

Tools and Techniques for Cryptographic Discovery

Given the limitations of manual efforts, the obvious question is: what tools or automated techniques can help build the cryptographic inventory? In recent years, a variety of specialized tools have emerged, and many security vendors market solutions for cryptography discovery. These range from network appliances that passively sniff traffic for deprecated algorithms, to software that scans code and binaries for cryptographic functions, to agents that monitor applications at runtime. Each category of tool employs different techniques, and each has strengths and blind spots. It’s important to understand the landscape of approaches, because (as we’ll emphasize) no single tool or technique will find everything. A combination of methods is needed for a truly comprehensive inventory.

Let’s break down the main modalities of cryptographic discovery and what they bring to the table:

Static Code Analysis

One approach is to scan source code (or even compiled code) for known cryptographic APIs and patterns. Static code analysis tools can parse through software without running it, looking for usages of cryptographic libraries or functions (e.g. calls to OpenSSL, Bouncy Castle, .NET cryptography classes, etc.). This method is useful for home-grown applications where source is available – it can efficiently flag instances of crypto usage, including hard-coded keys or deprecated algorithms in the codebase. However, static analysis has limitations: it might flag capabilities that aren’t actually used at runtime, and it may miss crypto that is invoked indirectly (for example via reflection or loaded at runtime). Also, if source code isn’t accessible (third-party apps), static analysis might not be possible.
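
To make the idea concrete, here is a deliberately naive sketch: walking a source tree and flagging lines that match a small, illustrative sample of crypto API patterns. Real static analyzers parse the code properly and ship far larger rulesets:

```python
import re
from pathlib import Path

# Small, illustrative sample of crypto API patterns; a real tool uses
# language-aware parsing and a far more complete ruleset.
CRYPTO_PATTERNS = re.compile(
    r"(EVP_EncryptInit|MessageDigest\.getInstance|Cipher\.getInstance"
    r"|hashlib\.(md5|sha1)|CryptoServiceProvider|Bouncy ?Castle)",
    re.IGNORECASE,
)

def scan_tree(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".c", ".cs", ".go"}:
            continue
        try:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if CRYPTO_PATTERNS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")
        except OSError:
            pass  # unreadable file; a real tool would log and move on

scan_tree("src/")
```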

Dynamic Analysis (Runtime Monitoring)

This technique involves observing applications during execution to catch cryptographic operations in real time. For example, an instrumented runtime or an agent can log calls to crypto libraries (like detecting when an application invokes a CryptoAPI or performs an encryption routine). Dynamic monitoring reveals what cryptography is actually being used in practice, including algorithms loaded on the fly or chosen based on data. It helps reduce false positives from static analysis by focusing on real usage. The downsides: it requires setting up the application in a test or live environment with monitoring enabled, which can be complex, and it might not exercise all execution paths (so some crypto usage might not occur during the monitoring window). There’s also potential performance overhead when monitoring.
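
As a toy illustration of runtime monitoring, the sketch below wraps Python’s `hashlib.new` so that every hash algorithm actually requested at runtime is logged. It only catches this one entry point – real monitoring products hook TLS libraries and OS crypto APIs at a much lower level:

```python
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crypto-monitor")

_original_new = hashlib.new

def _logged_new(name, *args, **kwargs):
    # Record which algorithm the application actually asked for at runtime.
    log.info("hashlib.new called with algorithm=%s", name)
    return _original_new(name, *args, **kwargs)

hashlib.new = _logged_new

# Any code running after the hook is installed is now observed
# (direct calls like hashlib.sha1() bypass this simple hook):
hashlib.new("sha1", b"legacy data")    # -> logged: algorithm=sha1
hashlib.new("sha256", b"modern data")  # -> logged: algorithm=sha256
```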

Passive Network Traffic Analysis

Some tools passively capture and analyze network traffic to identify where encryption is used and what protocols/ciphers are negotiated. For example, a network sensor might detect TLS handshakes and log what TLS version and cipher suite is being used, or identify SSH sessions, IPsec VPN tunnels, etc. This gives a view of cryptography in transit – highlighting outdated protocols (like TLS 1.0) or weak cipher use on the network. Passive network listening is non-intrusive (it won’t disrupt systems, since it’s just observing copies of traffic), which is great for sensitive environments. However, it has limited visibility: it can’t see cryptography that doesn’t manifest in network communications, and if traffic is encrypted, the sensor can’t always tell the details of the encryption inside (beyond the handshake info). In short, network analysis finds where encryption is happening over the wire and flags protocol-level issues, but misses any at-rest or internal crypto usage.
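
For example, if you already mirror traffic into an open-source monitor such as Zeek, its `ssl.log` records the negotiated version and cipher per connection. This sketch tallies those fields from a Zeek TSV log; it reads the `#fields` header rather than assuming column positions, and the version strings to flag are assumptions to adjust to your deployment:

```python
import csv
from collections import Counter

def summarize_ssl_log(path: str) -> Counter:
    tally = Counter()
    with open(path, newline="") as f:
        fields = None
        for row in csv.reader(f, delimiter="\t"):
            if not row:
                continue
            if row[0].startswith("#fields"):
                fields = row[1:]  # column names follow the #fields marker
            elif not row[0].startswith("#") and fields:
                rec = dict(zip(fields, row))
                tally[(rec.get("version"), rec.get("cipher"))] += 1
    return tally

for (version, cipher), count in summarize_ssl_log("ssl.log").most_common():
    # Version strings as Zeek typically renders them; verify against your logs.
    flag = "  <-- review" if version in {"SSLv3", "TLSv10", "TLSv11"} else ""
    print(f"{count:6}  {version}  {cipher}{flag}")
```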

System/Configuration Scanning

This category includes scanning OS settings, application configurations, and filesystems for cryptographic elements. For instance, a scanner might check the Windows registry or Linux configs for crypto policies (like accepted TLS versions), or search file systems for known cryptographic libraries (DLLs, JARs) and their versions. It might also detect the presence of hardware cryptography modules (e.g. is a Trusted Platform Module present, or is an HSM configured). Configuration scanning can reveal system-wide crypto settings and installed components. Its limitation is that it may not know how those components are used – it just tells you they exist. It also requires adequate permissions to read system settings, which can be an obstacle on locked-down systems.
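
A bare-bones version of such a scan might simply walk the filesystem for well-known crypto library file names. The hint list below is a small illustrative sample; production scanners also check versions, hashes, and package metadata:

```python
from pathlib import Path

# Illustrative sample of file-name fragments that indicate crypto components.
CRYPTO_LIB_HINTS = (
    "libssl", "libcrypto", "bcrypt.dll", "ncrypt.dll",
    "bouncycastle", "bcprov", "libgcrypt", "libsodium",
)

def find_crypto_libraries(root: str):
    # Run with sufficient privileges to read the target directories.
    for path in Path(root).rglob("*"):
        name = path.name.lower()
        if any(hint in name for hint in CRYPTO_LIB_HINTS):
            yield path

for hit in find_crypto_libraries("/usr/lib"):
    print(hit)
```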

Dependency and Binary Analysis

A more code-centric approach is analyzing software dependencies and binaries for cryptographic content. Dependency analysis means looking at the libraries an application includes – e.g. does it bundle OpenSSL, or some encryption module? If so, that’s a clue the app could use those algorithms (though not proof it actually does). Binary analysis involves examining compiled executables for byte patterns or symbols that indicate cryptography (for example, constants like S-boxes of AES, or function names like SHA256_Update). This can even uncover crypto in proprietary third-party software where source isn’t available. These techniques are powerful but require specialized expertise and tooling; they can produce false positives (detecting capability vs actual use) and might miss heavily obfuscated code. Still, they help cover scenarios where you have no source code and need to glean clues from the binaries.
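
One classic trick here is searching a binary for well-known algorithm constants. The first bytes of the AES S-box are fixed by FIPS 197, so finding them in an executable strongly suggests an embedded AES implementation (capability, not proof of use); a minimal sketch:

```python
# First 8 bytes of the AES S-box, fixed by FIPS 197 -- a strong indicator
# of a statically embedded AES implementation.
AES_SBOX_PREFIX = bytes([0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5])

def scan_binary(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    offset = data.find(AES_SBOX_PREFIX)
    if offset != -1:
        print(f"{path}: possible AES S-box at offset {offset:#x}")
        return True
    return False

scan_binary("/usr/bin/openssl")  # example target; detects capability, not use
```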

Cloud and Infrastructure Scanning

Many organizations leverage cloud services (IaaS/PaaS/SaaS) which have their own cryptographic configurations. Tools geared towards cloud can check, for example, your AWS or Azure environment for use of native crypto services (KMS keys, Key Vault, etc.), or inspect cloud storage to see if encryption-at-rest is enabled, and so on. They might also scan container images or infrastructure-as-code for references to cryptographic libraries. Ensuring your cloud workloads are part of the inventory is important, since they might be using cryptography abstracted by the cloud provider. One challenge is that each cloud platform is different, so tools need specialization for each (and may have limited access due to provider restrictions).
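
As an AWS-specific illustration, a short boto3 script can enumerate KMS keys and read each S3 bucket’s default-encryption configuration (suitable credentials and permissions are assumed; other clouds have equivalent APIs):

```python
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Enumerate KMS keys -- each is a cryptographic asset for the inventory.
# (Pagination omitted for brevity.)
for key in kms.list_keys()["Keys"]:
    meta = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
    print(f"KMS key {meta['KeyId']}: spec={meta['KeySpec']}, usage={meta['KeyUsage']}")

# Record each bucket's default encryption configuration.
for bucket in s3.list_buckets()["Buckets"]:
    try:
        cfg = s3.get_bucket_encryption(Bucket=bucket["Name"])
        rule = cfg["ServerSideEncryptionConfiguration"]["Rules"][0]
        algo = rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        print(f"s3://{bucket['Name']}: default encryption = {algo}")
    except ClientError as err:
        code = err.response["Error"]["Code"]
        print(f"s3://{bucket['Name']}: could not read encryption config ({code})")
```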

Hardware/Firmware Analysis

In Operational Technology (OT), IoT, and other hardware-heavy environments, you may need to analyze devices for built-in cryptography. This could involve firmware scanning or using hardware introspection tools to see if devices implement things like secure boot, encryption of data at rest, proprietary crypto protocols, etc. It often requires cooperation from device vendors or very specialized testing, given the diversity of hardware and limited interfaces to extract info. But it’s crucial for a full inventory if you have things like smart sensors, industrial controllers, or other embedded systems in scope – these might run older or hard-coded cryptographic algorithms.

Certificate and Key Discovery

Another important aspect is discovering digital certificates, keys, and cryptographic material managed in the organization. Certificates (especially X.509) contain information about algorithms (RSA vs ECC, key sizes, signature algorithms) which need to be inventoried. Tools like certificate management solutions or key discovery tools can scan networks and systems to find certificates and keys in use (in files, keystores, etc.). This is especially relevant for public-key cryptography inventory – e.g. identifying all RSA 2048-bit certificates, all code-signing keys, etc. It’s worth noting that some crypto use might not be an “algorithm call” per se but rather manifested as a certificate or key file deployed somewhere.
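
A minimal network-facing version of this idea: connect to each known TLS endpoint, pull the leaf certificate, and record its key type, size, and signature hash (using Python’s standard `ssl` module plus the `cryptography` package; keystores and certificate files on disk need a separate file-based scan):

```python
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def inspect_endpoint(host: str, port: int = 443) -> dict:
    # Fetch the server's leaf certificate as PEM and parse it.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    pub = cert.public_key()
    if isinstance(pub, rsa.RSAPublicKey):
        key_desc = f"RSA-{pub.key_size}"    # quantum-vulnerable
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        key_desc = f"EC-{pub.curve.name}"   # quantum-vulnerable
    else:
        key_desc = type(pub).__name__
    try:
        sig_hash = cert.signature_hash_algorithm.name
    except Exception:                       # e.g. Ed25519 has no separate hash
        sig_hash = "n/a"
    return {
        "endpoint": f"{host}:{port}",
        "subject": cert.subject.rfc4514_string(),
        "key": key_desc,
        "signature_hash": sig_hash,
        "not_after": cert.not_valid_after_utc.isoformat(),  # cryptography >= 42
    }

print(inspect_endpoint("example.com"))  # placeholder hostname
```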

Log and Memory Analysis

As a complementary method, organizations can analyze logs for cryptographic events (e.g. SSL/TLS error logs, deprecation warnings about weak cipher use, etc.). Security logs might show if something failed a cryptographic handshake or if an application logged an encryption operation. Memory dump analysis, while advanced, can find cryptographic keys or operations loaded in memory during runtime (useful to catch things that don’t appear in code but only in execution). These are expert-level techniques and can generate huge volumes of data, but they can occasionally surface obscure crypto usage.

As we can see, each technique targets a different facet of the IT environment. Some focus on code (static, binary), some on live behavior (network, runtime), others on configuration and artifacts (files, certificates). No single approach is “best” or covers everything. For instance, a network sensor might flag that a particular server is accepting TLS 1.0 connections (indicating old crypto), whereas only a code scan would reveal that the same server’s application also uses an outdated hashing algorithm for passwords. One tool might detect that an algorithm is present (a capability), while another confirms it is actually being used.

It’s therefore widely recommended to use multiple tools in combination to build the inventory. In fact, many organizations deploy a multi-tool strategy: for example, run a static analysis on in-house code, use a network monitoring appliance for passive discovery, scan systems for known crypto libraries, and use a certificate discovery tool – then correlate all the findings together. Each tool will contribute pieces to the puzzle. Overlap is fine (it serves as validation), and each will also contribute unique findings.

I do want to stress this point: beware of any vendor claiming their tool alone can give a full cryptographic inventory. It’s simply not possible given the myriad ways crypto appears in an environment. In fact, some vendors themselves acknowledge that their tool addresses one layer; for full coverage, they expect clients to integrate results from other sources. As I previously noted, “cryptographic inventory tool vendors may imply their tools provide a complete solution… they can never provide 100% on their own” – only a holistic approach with multiple techniques (plus human oversight) can achieve a truly comprehensive result.

Example Vendor Solutions

To illustrate, have a look at my separate post covering a few types of solutions on the market and what each specializes in: “Cryptographic Inventory Vendors and Methodologies.”

Regardless of tool choice, the key is to understand what each method covers and where its blind spots are. Plan to deploy a combination such that one tool’s blind spot is covered by another’s strength. For example, static code analysis + runtime monitoring together can catch both declared capabilities and actual usage. Pairing network analysis with system scanning covers both network communications and at-rest implementations. If you have a lot of third-party software, emphasize binary and filesystem scanning plus vendor engagement, since you can’t instrument those apps easily. If you have custom software, lean on static/dynamic analysis to mine your own code deeply.

Finally, don’t forget to involve the human element smartly: have architects and developers review findings to fill gaps (“manual audits”). Also, engage with vendors of proprietary systems – ask them for cryptography information or SBOM/CBOM data for their products. Many enterprise software vendors are increasingly prepared to answer such questions (some may even provide a cryptography attestation or documentation if prompted, especially as customers ask more due to regulations). You may need nondisclosure agreements in place, but getting the vendor’s input can save a lot of reverse-engineering effort.

The takeaway is that comprehensive discovery requires a multi-faceted toolkit and strategy. By using the right mix of automated tools and manual techniques, you can piece together a much more complete cryptographic inventory than any single approach would yield.

Building the Cryptographic Bill of Materials (CBOM)

Performing all the discovery steps above will generate a trove of data: lists of cryptographic algorithms found, locations in code, network endpoints using certain ciphers, certificates collected, etc. The next step is to consolidate and organize all this information into a usable form – typically, a Cryptographic Bill of Materials (CBOM). The CBOM is a structured inventory document (or database) that enumerates all cryptographic components in your systems, much like a software BOM lists software components.

A CBOM provides a holistic view of where and how cryptography is implemented across the organization. This includes details such as: what algorithms and protocols are used by each system, key lengths and configurations, which libraries or modules implement them, and in which applications or devices they reside. It essentially maps cryptographic entities to the assets/systems using them. For example, a CBOM entry might show that System A (say, an e-commerce web app) uses Algorithm X (RSA-2048 for TLS, located in OpenSSL version Y), and Algorithm Z (BCrypt for hashing passwords, via library version W), etc., along with location references (e.g. config files or code references).
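
To make that concrete, here is what one such entry could look like, loosely modeled on the shape of a CycloneDX 1.6 “cryptographic-asset” component – treat the field names and values as an illustrative approximation rather than a verbatim excerpt of the spec:

```python
# One illustrative CBOM entry, loosely following the shape of a
# CycloneDX 1.6 "cryptographic-asset" component (field names approximate).
cbom_entry = {
    "system": "ecommerce-web-app",
    "component": {
        "type": "cryptographic-asset",
        "name": "RSA-2048",
        "cryptoProperties": {
            "assetType": "algorithm",
            "algorithmProperties": {
                "primitive": "pke",             # public-key encryption/signatures
                "parameterSetIdentifier": "2048",
                "nistQuantumSecurityLevel": 0,  # quantum-vulnerable
            },
        },
    },
    "evidence": {
        "library": "OpenSSL 1.1.1w",
        "locations": ["nginx.conf", "/etc/ssl/certs/shop.example.pem"],
    },
    "context": {"data_sensitivity": "high", "required_secrecy_years": 10},
}
```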

Why is a CBOM important? A well-structured CBOM serves several purposes:

  • Complete Visibility: It is the one-stop reference that shows all cryptographic usage in one place. This helps architects, security teams, and risk managers quickly see what is deployed where. No more guessing or hunting through disparate reports – the CBOM collates it.
  • Regulatory and Governance: As noted, regulators are beginning to mandate crypto inventories as part of security programs. A CBOM format allows you to demonstrate compliance by having a documented inventory that can be shared (with auditors, regulators, or partners) in a standardized way. In fact, the industry is moving toward standard schemas for CBOM data to integrate with broader SBOM (Software Bill of Materials) efforts. For example, the CycloneDX SBOM standard introduced support for including Cryptographic BOM information in its latest version. This means organizations can extend their SBOM practice to cover crypto, making reporting more uniform.
  • Risk Prioritization and Strategy: The CBOM is not just a list; it can embed risk context. Many teams enhance the inventory with tags like “quantum-vulnerable” or “needs upgrade by 2025” based on the algorithms. For instance, the CBOM can flag which items use RSA/ECC (to be replaced for PQC) vs which use only symmetric/AES (less urgent). By capturing metadata such as the sensitivity of data protected by a crypto component or the lifespan needed for that data’s security, the CBOM becomes a tool to prioritize remediation. It helps answer: which crypto usages are most critical to fix first?
  • Resource Planning: Knowing exactly which systems use what cryptography allows for efficient planning of upgrades. The CBOM can be used to scope the effort and resources needed for crypto modernization. For example, if the CBOM shows 40 applications using OpenSSL 1.0 (which is deprecated), that gives a concrete scope for an upgrade project. Without a CBOM, you might under- or over-estimate the work. It also ensures nothing is forgotten during migration – you have a checklist of all items that need attention.
  • Communication and Vendor Management: A CBOM provides a common reference when engaging with product teams or external vendors. If a vendor claims their product is quantum-safe, you can cross-check against your CBOM – does their product’s cryptography (as listed) actually align with quantum-safe algorithms? It also can facilitate supply chain security discussions, by sharing CBOM info with partners (e.g. a cloud provider might one day provide a CBOM of their service’s crypto so customers can integrate that knowledge).

To build the CBOM, you’ll need to aggregate data from all the discovery tools and processes. This often means normalizing different data formats. One practical tip is to use a spreadsheet or database to compile entries, with consistent fields such as: Asset/System Name, Location, Cryptographic Item (algorithm/protocol/library), Details (key size, version, configuration), Quantum-Vulnerability (yes/no), etc. Some organizations use graph databases or specialized tooling to model the relationships (as suggested by the PQC Coalition – envisioning the inventory as a graph of crypto entities and their connections). But a simpler tabular approach can work as long as it’s thorough.
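
If you go the tabular route, pinning the schema down in code early keeps every data source consistent. A minimal sketch of the fields suggested above (the vulnerability check is deliberately coarse):

```python
from dataclasses import dataclass, asdict

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

@dataclass
class InventoryEntry:
    asset: str        # system/application name
    location: str     # host, repo path, config file, etc.
    crypto_item: str  # algorithm / protocol / library
    details: str      # key size, version, configuration
    source: str       # which discovery tool produced this finding

    @property
    def quantum_vulnerable(self) -> bool:
        # Coarse classification; refine per algorithm and key size.
        return any(alg in self.crypto_item.upper() for alg in QUANTUM_VULNERABLE)

entry = InventoryEntry("ecommerce-web-app", "lb01:443", "TLS 1.2 / RSA-2048",
                       "cert expires 2026-03", "network-scan")
print(asdict(entry), "quantum_vulnerable:", entry.quantum_vulnerable)
```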

Categorize and validate the data as you compile. Group by algorithm type, by system criticality, or by business unit – whichever views will be useful. It’s also wise to double-check anomalies or unknowns. During inventory, you may encounter items that are unclear (e.g. a binary was flagged as possibly containing the “Blowfish” algorithm – is it actually used?). These “known unknowns” should be annotated in the CBOM and investigated further. It’s better to mark an entry as “needs confirmation” than to assume. Over time, you resolve those by either further analysis or asking the right people.

Maintaining the CBOM is an ongoing effort. Treat it as a living document (or dataset) that must be updated as things change. This is where integrating with change management processes helps – e.g., require that any new system deployment or major update is accompanied by an update to the cryptography inventory. Automated continuous monitoring tools can feed into the CBOM to catch changes (for example, an agent that detects if a new TLS certificate was installed on a server could trigger an update in the inventory database). Regular reviews (quarterly or semi-annual) can ensure the CBOM stays accurate and keeps up with the environment.

The good news is that standardization of CBOMs is improving. As mentioned, formats like the just-released CycloneDX 1.6 now explicitly support representing cryptographic elements. IBM has released a CBOM specification and even open-sourced some tooling to help generate CBOMs, aligning it with SBOM workflows. We can expect more mature tools that automatically output a CBOM from scans, which can then be shared or exported in standard format. This will reduce the manual collation effort over time.

In summary, creating a Cryptographic Bill of Materials is the culmination of the inventory process – it turns raw discovery data into a structured asset for decision-making. A well-built CBOM allows your organization to actually leverage the inventory findings: to drive remediation, inform strategy, and demonstrate control over cryptography to stakeholders. It is a pivotal milestone on the road to crypto-agility and quantum security. With the CBOM in hand, you are far better equipped to plan the next steps, such as transitioning algorithms or strengthening policies.

Practical Execution Challenges (Access, Impact, and Coordination)

Even with the right tools and plan on paper, executing a comprehensive cryptographic inventory in a real-world environment comes with practical challenges. Two major concerns often arise: gaining access to systems for scanning, and avoiding disruption of production operations. Let’s address these, along with strategies to mitigate the issues:

Access to Systems & Data

Many inventory methods (like static code scanning or configuration scanning) require substantial access – for example, read access to source code repositories, permission to install scanning agents on servers, or credentials to log into devices. In a large organization, obtaining such access can be non-trivial. System owners (especially of critical or sensitive systems) might be reluctant to allow new tools to scan their systems due to security or privacy concerns. There may be internal bureaucracy to navigate: approvals from change management boards, coordination with IT operations, etc. Additionally, some systems (particularly in OT/industrial settings) are tightly controlled – you cannot just deploy new software on a factory control system without going through vendor certification or affecting warranties. Negotiating access is thus a critical part of the inventory project. It often requires management support to communicate the importance and to possibly mandate cooperation from various silos. One tip is to leverage existing tools and data where possible: for instance, if certain monitoring tools or configuration management databases (CMDBs) are already in place, see if they can provide some cryptography info without needing new access. But in most cases, you will need to schedule scans or agents with the system owners’ involvement.

Active vs. Passive Scanning (Impact on Production)

A big worry is whether scanning or monitoring will impact system performance or stability. This is especially acute in production environments that are mission-critical (e.g. manufacturing control systems, financial transaction systems) where downtime is unacceptable. Here, the approach must be tailored to minimize risk. One guideline is to use passive, agentless techniques by default on critical systems. For example, instead of installing a heavy agent on a production server, you might rely on passive network monitoring of that server’s traffic (which doesn’t touch the server itself). Or use read-only methods like scanning a backup of the system or analyzing configurations offline. If active scanning (like running an agent or performing a deep scan on a live system) is necessary, plan it carefully: do it in maintenance windows, limit resource usage, and test the scanner in a staging environment first. In many cases, organizations set up a test or sandbox environment that mirrors production – essentially a digital twin of the system – and run the intensive scans there. This can demonstrate what the scan would find and measure performance impact without touching the real production. If a digital twin or full staging environment is not available, consider conducting scans during off-peak hours and monitoring system metrics closely to abort if any issues arise. It’s also wise to coordinate with the system’s vendor – they might have guidance or tools for safely auditing the system’s cryptography.

Operational Technology (OT) Environments

Inventorying cryptography in OT (industrial control systems, SCADA, IoT devices, etc.) deserves special mention. These environments prioritize safety and reliability above all. Many OT devices use proprietary protocols and have limited computing resources, so typical IT scanning tools may not even work on them. Additionally, OT engineers may be very cautious about any interference with the systems (for fear of causing a production line stoppage or a power grid glitch). For OT, a passive monitoring approach is usually preferred. For example, rather than trying to install software on a PLC (Programmable Logic Controller), one might monitor network communications of that PLC to identify if it’s using any encryption and of what kind. Also, working closely with the device vendors is key – often they are the only ones who can tell you what cryptography is inside their black box device. You may have to rely on vendor documentation or ask them to provide a statement of cryptographic usage. In some cases, sector regulations (like in utilities or healthcare) are pushing device manufacturers to supply this info to customers. Use those channels if available. Another tip: leverage any existing OT security monitoring tools (like specialized OT IDS/IPS) which might already flag old protocols or weak encryption on the OT network.

Resource and Time Constraints

Executing the inventory can be a long project, and organizations must allocate skilled personnel time for it. It’s not uncommon to require a dedicated team (or external consultants) working for many months. This can strain budgets and compete with other initiatives. The challenge is to keep momentum and not let the project stall. One approach is to break it into phases – perhaps tackle one division or a set of systems at a time, rather than boil the ocean all at once. Show incremental wins (e.g. “this quarter we fully inventoried our customer-facing applications”) to maintain support. Also be prepared for the multi-year timeline for large organizations mentioned earlier. Manage expectations with leadership that this is a marathon, not a sprint. It helps to tie the effort to concrete risk reductions (“by doing this, we prevented X and Y potential compliance issues and will save effort during the PQC migration later”).

Integrating Results & Avoiding Overwhelm

With multiple tools running, you will get a lot of data. Another practical challenge is integrating all these findings into one coherent picture (the CBOM). It can be overwhelming – different tools output in different formats, some findings might seem contradictory until analyzed, etc. It’s important to have people on the team who can write scripts or use data analytics to merge and deduplicate results. Also, define some scoping boundaries so you don’t drown in data: for example, you might initially focus on cryptography relevant to public-key algorithms (since those are most urgent for PQC concerns) and not immediately catalog every use of symmetric encryption if time is limited. You can prioritize within the inventory process itself. Just be sure to document any scope limits so that you can circle back later; e.g., “Phase 1 inventory will cover all uses of RSA, ECC, TLS, SSH, and digital signatures. Phase 2 will expand to symmetric and hashing.” A risk-based scoping like this can make the task more manageable, though ultimately you want everything documented.

Internal Communications

Finally, don’t underestimate the need to evangelize and communicate internally. Some teams might view the inventory as an audit or an implication that they’ve done something wrong (“why is security poking around?”). It’s helpful to frame it positively: as an organization-wide initiative to strengthen security and prepare for the future, not a blame game. When teams cooperate and provide info, acknowledge their help. If a team is defensive, stress that this is about identifying systemic issues, not finger-pointing. Having executive sponsorship (like a CISO mandate) can open doors, but on-the-ground cooperation comes from good relationships.

In dealing with all these practical aspects, a few best practices emerge:

  • Use agentless/passive discovery by default where possible to reduce risk.
  • Where agents/tools are needed, test in a sandbox first and work with system owners on scheduling.
  • Document and communicate the plan to stakeholders so they know what to expect (no surprises that some scanner is running).
  • Involve the compliance and risk teams – they can often help make the case to business units that this inventory is necessary (especially if there are regulatory deadlines, etc.).
  • Be flexible and ready with alternative approaches if you hit a wall (e.g., if you absolutely cannot scan a certain system, perhaps do an in-depth manual code review on it, or get vendor attestation as a stopgap).
  • Track progress methodically. Keep a checklist of systems and mark off when they’ve been inventoried. This will help ensure nothing slips through cracks in a large environment.

Project Plan

For all the reasons we discussed, implementing a comprehensive cryptographic inventory requires careful planning and coordination across stakeholders. I’ve put together a high-level plan to help you get started. This plan assumes an environment with minimal pre-existing cryptography oversight and emphasizes thoroughness, including special steps for sensitive OT/IoT contexts where production impact must be avoided.

1. Initial Preparation – Asset Inventory & Stakeholder Alignment

Begin by establishing a foundation for the inventory effort. Compile an up-to-date asset inventory of all systems, applications, devices, and networks in the organization. This includes IT assets (servers, endpoints, applications), as well as OT/IoT devices, cloud services, and third-party solutions. Asset management might itself be in early stages, so expect to reconcile data from CMDBs, network scans, and facility records to get a full list. In parallel with asset gathering, perform stakeholder mapping: identify all parties relevant to cryptography usage. This typically involves security architects, IT operations, application development leads, compliance officers, and for OT – control system engineers or plant managers. It’s crucial to involve these stakeholders early to understand where cryptography is likely used (e.g. a developer can point out that a certain application uses OpenSSL, or an OT engineer knows a PLC uses an encrypted protocol). Engage them to set common goals and context, for example, explain that the inventory will help meet upcoming regulatory requirements (such as the EU’s mandate to complete a cryptographic asset inventory by 2026, which is considered a “no regret move” for security). This step ensures organizational buy-in. Deliverables from this phase: a confirmed asset list (IT and OT), a list of stakeholder contacts, and a charter document or kick-off meeting that defines the inventory project’s scope and objectives.

2. Scoping and Prioritization

With a clear view of assets, conduct a scoping exercise to determine the boundaries and focus areas of the cryptographic inventory. Not all assets carry equal risk or complexity, so categorize systems by factors such as: criticality of data handled, external exposure, compliance requirements, and known usage of cryptography (if any). For example, customer-facing web applications handling sensitive data and using TLS would be high priority, as would core internal systems using VPN or database encryption. Legacy systems or OT devices with proprietary protocols might be another category of concern. Define the inventory scope to include all five cryptography domains – data in transit, data at rest, applications/code, certificates/keys, and hardware devices – essentially mirroring the five pillars (External network, Internal network, IT assets, Databases, Code). At this stage, also prioritize which segments to tackle first. A sound approach is to start with “low-hanging fruit” that gives immediate visibility: for instance, external-facing assets (which can often be scanned passively with zero risk) or known enterprise IT systems where scanning tools can be readily run. Set a prioritization matrix that might rank, say, external network crypto discovery as Phase 1, internal critical servers as Phase 2, and so on. Additionally, consider any regulatory deadlines – e.g., if there’s a mandate requiring a report on cryptography in critical systems within 6 months, those systems get priority. The output of this step is a scoped plan that lists what will be inventoried (and what is out-of-scope, if anything) and in what order. It’s also helpful to define success criteria here (e.g., “100% of TLS endpoints inventoried,” “All business-critical apps reviewed for crypto algorithms,” etc.).

3. Tool Evaluation and Procurement

Based on the scope and priorities, identify which tools or vendor solutions will be needed for discovery. Likely, a combination of methods will be required (no single tool covers everything). Evaluate options in each category:

  • Network discovery: If passive monitoring of network traffic is needed (especially for OT safety), look at tools like CryptoNext’s passive probe or open-source alternatives (Zeek with custom scripts). If active network scanning is acceptable, consider Nmap or similar with scripts to probe supported ciphers on services (a minimal active-probe sketch follows this list).
  • Application and code analysis: For custom applications, you may need static code scanners or instrumentation tools. Evaluate open-source static analysis (e.g. CodeQL with cryptography queries) versus commercial offerings (e.g. SandboxAQ’s Application Analyzer or a SonarQube plugin) for finding crypto usage in code. For binary applications or third-party software, a binary analysis tool or software composition analysis tool that can detect crypto libraries might be useful.
  • Host-based scanning: Determine if an agent-based tool is feasible. In a greenfield case, if an EDR or endpoint management system is already planned (say, deploying Tanium or CrowdStrike in IT), leveraging that with a crypto discovery module (like InfoSec Global’s AgileSec via Tanium or an open script) could be efficient. Alternatively, an agentless approach via remote login scripts could be used for servers (though less comprehensive).
  • Cloud and virtual infrastructure: If the environment includes cloud services, consider cloud-native assessment – e.g., use cloud provider APIs to list where encryption is enabled or what KMS keys exist. Some cloud security posture tools can enumerate storage encryption settings, etc., which feeds into the inventory.
  • Specialty/OT analysis: For industrial or IoT devices, pure software scanners might not exist. Here, consider vendor-assisted analysis – reaching out to the device manufacturers for documentation on cryptography (or any CBOM they can provide). If budget permits, engaging a specialist firm to analyze firmware for crypto (using techniques like firmware binary scanning) may be prudent.
Evaluate each tool for compatibility with the environment (e.g., ensure passive tools can handle the network speeds, static code tools support the programming languages in use, etc.) and for non-intrusiveness in critical areas. It’s wise to involve the stakeholders from Step 1 in this evaluation: e.g., have developers assess whether a static analysis tool will produce manageable output, or OT engineers vet that a network probe won’t disrupt operations. After evaluation, proceed to procure or arrange licenses for the chosen solutions (or plan the development of any in-house scripts if using open-source). The deliverable here is a tooling strategy – a mapping of each scope area to a specific discovery method/tool, along with any procurement timelines. For instance, the plan might say: use Tool X for external TLS scanning, Tool Y for endpoint certificate scanning, Tool Z for code analysis, etc.
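
Where active probing is acceptable, even the Python standard library can enumerate which TLS protocol versions a service accepts, one handshake per version. A minimal sketch (the hostname is a placeholder; note that your local OpenSSL build must itself still permit the legacy versions being tested):

```python
import socket
import ssl

def accepted_tls_versions(host: str, port: int = 443) -> dict:
    """Attempt one handshake per protocol version and report which succeed."""
    results = {}
    for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                    ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # inventory probe, not validation
        ctx.minimum_version = version
        ctx.maximum_version = version
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    results[version.name] = tls.cipher()[0]  # negotiated suite
        except (ssl.SSLError, OSError):
            results[version.name] = None  # version refused or unreachable
    return results

print(accepted_tls_versions("internal-app.example.com"))
```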

4. Methodological Layering – Designing a Discovery Approach

Now, develop a detailed discovery methodology that layers the selected tools to cover all angles. This is essentially the playbook of how you will execute the scanning/inventory. It should address how data from different methods will complement each other. For example, you might decide:

  • Use passive network monitoring for an initial inventory of all TLS/SSL endpoints and their cipher suites (External & Internal Network pillar). This might quickly highlight outdated protocols or unknown services.
  • In parallel, run file system scans on servers and workstations for known key stores, certificate files, and config files (IT Assets pillar). This could be done with an agent script that looks for .pem, .p12, ssh/known_hosts, registry locations, etc.
  • Deploy static code scans on all in-house application repositories (Code pillar). This will identify crypto API usage (e.g. use of RSA, AES, hashing functions) in the software. Results can populate a CBOM of algorithms in code.
  • Query databases and storage systems for encryption settings (Database pillar). For each DB, determine if encryption at rest is enabled and what algorithm is used; for any file shares, see if they use EFS or other encryption. Some of this may be manual or via scripts.
  • Inventory all active certificates and cryptographic keys (Certificates/Key pillar). This could involve pulling from certificate management (if any), scanning Active Directory for certificates, or using tools like Venafi (if planned) to gather an inventory of machine identities.
The idea is to layer passive, static, and dynamic techniques so that they reinforce one another. For instance, if static code analysis finds that Application A uses 2048-bit RSA, the network scan might confirm that Application A’s TLS certificate is RSA 2048 and the filesystem scan might find the private key file – together painting a full picture (a correlation sketch follows below). Design the methodology so that overlap is intentional (to cross-verify findings) and gaps are assigned (areas one method can’t reach should be covered by another). Also plan for context enrichment: deciding how you will gather contextual info like data sensitivity or business criticality for each crypto asset. For example, if a scanner finds a TLS cert, you might tag it with which application or business unit it belongs to – this was highlighted in IBM’s approach of enriching findings with the value/criticality of related data. Having this in the methodology ensures the inventory isn’t just raw data but is meaningful for risk prioritization. Document the discovery approach clearly, as this will guide the implementation teams and also serve as a reference for auditors or management to understand how comprehensive coverage is achieved.
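
A sketch of that cross-source correlation: group normalized findings by asset and algorithm, so that agreement across tools builds confidence and single-source findings are flagged for follow-up (the record shape is the illustrative schema from the CBOM section):

```python
from collections import defaultdict

# Findings from different tools, normalized to (asset, algorithm, source).
findings = [
    ("app-a", "RSA-2048", "static-code-scan"),
    ("app-a", "RSA-2048", "network-scan"),      # TLS cert confirms code finding
    ("app-a", "RSA-2048", "filesystem-scan"),   # private key file found on disk
    ("app-a", "SHA-1",    "static-code-scan"),  # one source -> needs follow-up
]

by_asset = defaultdict(lambda: defaultdict(set))
for asset, algorithm, source in findings:
    by_asset[asset][algorithm].add(source)

for asset, algos in by_asset.items():
    for algorithm, sources in algos.items():
        status = "confirmed" if len(sources) > 1 else "needs confirmation"
        print(f"{asset}: {algorithm} [{status}] seen by {sorted(sources)}")
```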

5. Pilot Testing and Validation (Especially for OT/IoT)

Before rolling out full-scale discovery, conduct pilot tests in controlled environments. This is doubly important for any sensitive segments like OT/IoT, where you must ensure scanners won’t disrupt operations. Select a few representative systems from each category for a pilot:

  • For IT systems, perhaps spin up a staging environment or select a non-critical subset of servers and run the host-based scans and static code analysis there first.
  • For network scanning, start with a passive monitoring period on a core switch that mirrors traffic, or run active scans on a small IP range during a maintenance window.
  • For OT, sandbox testing is crucial. If possible, use a lab replica of the OT environment, or test on a single device that’s not controlling critical processes. Verify that passive monitoring on an OT network TAP truly sends no packets (tools like COMPASS explicitly avoid injecting traffic). If doing any active analysis of firmware, make sure you have vendor guidance first.
During these pilots, validate the output and adjust tool configurations. You may find, for example, that the static code scan produces too many false positives – you would then tweak the rules or scope to focus on true cryptographic calls (perhaps by incorporating known patterns or switching rule-sets). Or the network probe might need its protocol decoders tuned if it encounters a proprietary protocol. In OT pilots, pay particular attention to performance: confirm that no latency or load is introduced (the CryptoNext probe notes that its passive TAP design avoids latency – exactly the kind of claim to verify). Engage the system owners during validation – e.g., have an OT engineer confirm that device metrics remain normal after a passive sensor is deployed.

Additionally, use the pilot to test data-capture boundaries: determine whether your tools are capturing sensitive data contents and, if so, ensure that capture is either avoided or the data is protected. For instance, a dynamic application hook might intercept raw data; you may choose to log only algorithm metadata, never the plaintext or key itself, to avoid creating new sensitive data.

Once pilots are complete, review the results with stakeholders to build confidence, and only then proceed to broad deployment. If any environment is too sensitive even for passive methods, plan an alternative (such as a vendor assessment, or skipping that segment with a documented risk acceptance until a safe method is found).
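One way to enforce that data-capture boundary in a dynamic hook is to record only algorithm metadata plus a one-way fingerprint. The helper below is a hypothetical illustration, not tied to any particular instrumentation framework:

    import hashlib
    import logging

    logger = logging.getLogger("crypto-inventory")

    # In a real deployment the salt would be a per-deployment secret, not a constant.
    SALT = b"inventory-salt:"

    def record_crypto_use(algorithm, key_bits, location, key_material):
        """Log metadata about an observed cryptographic operation.

        Deliberately discards keys and plaintext; only a salted, truncated
        hash is kept so repeated use of the same key can be correlated.
        """
        fingerprint = hashlib.sha256(SALT + key_material).hexdigest()[:16]
        logger.info("algo=%s key_bits=%d at=%s key_fp=%s",
                    algorithm, key_bits, location, fingerprint)

    # Example: a hook intercepting an RSA signing call would report only this.
    record_crypto_use("RSA", 2048, "app-server-01:sign_service", b"<key bytes>")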

6. Full Deployment – Data Capture and Normalization

Execute the discovery across the entire in-scope environment in phases (per the earlier prioritization). As data starts coming in from the various tools, establish a central data repository to compile the findings. It is critical to normalize the data into a common schema so that outputs from different methods can be correlated. Here, leverage standardized formats where possible: use the concept of a Cryptography Bill of Materials (CBOM) entry as the unit of record – each cryptographic asset found (certificate, algorithm, key, etc.) should be recorded with key attributes such as location, type, algorithm, key length, and usage context. Standards like CycloneDX’s CBOM define fields for algorithms, primitives, modes, and so on, which can guide your schema. If a static code scan tool outputs results in SARIF or a proprietary format, consider writing a conversion to an interim format (some organizations in NIST pilots normalized static scan results to SARIF to make aggregation easier). Similarly, network scan results (e.g., a list of hosts with TLS 1.2/RSA) should be transformed into the same schema entries as, say, a host scan result (which might list that host’s RSA key usage).

Perform data enrichment during this stage: link each cryptographic finding to the asset inventory and business context. For instance, if an IP address from network scanning corresponds to a known server in the CMDB, tag the record with that server’s name, owner, and criticality. Also categorize findings by pillar (network, app, etc.) and by risk (e.g., “uses RSA-2048” vs. “TLS 1.3, OK”).

The central repository might be a database or even a spreadsheet for smaller environments, but ideally use a security inventory tool or GRC platform to store and track it. Some enterprise tools allow import of custom inventory data – ServiceNow, for example, can house a cryptographic inventory module, and InfoSec Global notes integration with ServiceNow to analyze cryptographic material in one place. If no such platform exists, a unified spreadsheet or a simple database with a dashboard can suffice in the interim. The key is to avoid data silos – all teams should be looking at one combined CBOM dataset, not separate lists from each tool. During data capture, also maintain rigorous logs and backups of the raw scan data (for the audit trail, and in case correlation needs to be revisited). By the end of this phase, you should have a draft comprehensive cryptographic asset inventory – essentially the CBOM – listing all discovered cryptographic algorithms, instances, keys, certificates, and protocols, with their attributes, aggregated from all discovery sources.
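As an illustration of what such normalization can look like, the sketch below defines one possible record shape – the field names are our own, loosely inspired by CycloneDX CBOM concepts rather than copied from the specification – and maps one hypothetical TLS-scanner row into it:

    import json
    from dataclasses import dataclass, asdict, field
    from typing import Optional

    @dataclass
    class CryptoAsset:
        """One normalized inventory record (illustrative schema)."""
        asset_type: str            # "certificate", "key", "algorithm-use", "protocol"
        algorithm: str             # e.g. "RSA", "AES-128-GCM", "TLSv1.2/..."
        key_bits: Optional[int]
        location: str              # host:port, file path, or code reference
        pillar: str                # "network", "it-assets", "code", "database", "pki"
        context: dict = field(default_factory=dict)  # owner, criticality, sensitivity

    def from_network_scan(row):
        """Map one hypothetical TLS-scanner output row into the common schema."""
        return CryptoAsset(
            asset_type="protocol",
            algorithm=f"{row['tls_version']}/{row['cipher_suite']}",
            key_bits=row.get("public_key_bits"),
            location=f"{row['host']}:{row['port']}",
            pillar="network",
            context={},  # enriched later from the CMDB (owner, criticality, ...)
        )

    row = {"host": "10.0.0.5", "port": 443, "tls_version": "TLSv1.2",
           "cipher_suite": "ECDHE-RSA-AES256-GCM-SHA384", "public_key_bits": 2048}
    print(json.dumps(asdict(from_network_scan(row)), indent=2))

A filesystem finding or a static-analysis hit would be mapped into the same record shape, which is precisely what makes cross-source correlation and later risk ranking possible.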

7. Analysis, Risk Ranking, and CBOM Documentation

Once the CBOM dataset is compiled, perform a thorough analysis and validation – this step translates the raw inventory into actionable intelligence. First, do a quality check: are there obvious gaps? Compare the inventory against the asset list – did every critical system yield some cryptography information, or do some show “no crypto found”, which might indicate a miss? If something expected is absent (e.g., you know an application uses encryption but the scan didn’t catch it), run additional targeted discovery or manual inspection for that case. Conversely, investigate unexpected items – the inventory may reveal, for example, an unknown self-signed certificate on a device, or an application using an old cipher that stakeholders were unaware of. Engage system owners to confirm findings for accuracy.

Next, perform a risk ranking of the inventory. Evaluate each cryptographic item for strength and compliance: flag items that use deprecated or soon-to-be-deprecated algorithms (RSA, 3DES, SHA-1, etc.), weak key lengths (e.g., 1024-bit keys), or non-compliant implementations (such as SSLv3 or a hardcoded credential). Also factor in context – a weak cipher in an internal, low-stakes system might be lower priority than a moderately weak cipher on an external-facing system carrying sensitive data. The result should be a prioritized list of issues derived from the inventory, often grouped by remediation type; for example, you might identify a set of internal applications that all use a given legacy crypto library – that becomes a single remediation campaign. This risk-focused analysis echoes IBM’s “analyze” phase, where the inventory is used to produce a prioritized action plan.

Finally, document the CBOM formally. This might be a report or a living document; many organizations create a Cryptographic Bill of Materials report that enumerates all cryptographic assets per system, often broken down by product or application. For each component, the CBOM should detail which algorithm/protocol it uses, key lengths, where it is located (system or code reference), and the associated risk level (e.g., OK, needs upgrade, not approved by policy). This document or database is effectively the baseline of your cryptographic posture. It can also be shared with compliance auditors or regulators to demonstrate quantum-readiness progress (NIST and others recommend maintaining this cryptography inventory as part of migration planning). Ensure the CBOM is stored securely – since it describes your cryptography in detail, it could serve as a blueprint for attackers if leaked, so treat it as confidential.
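A first automated pass over the normalized records can apply simple flagging rules before analysts refine the ranking by hand. The thresholds and category strings below are simplified assumptions, not a policy recommendation, and the naive substring matching is only adequate for a sketch:

    # Illustrative flagging rules over normalized records (see step 6's schema).
    QUANTUM_VULNERABLE = ("RSA", "ECDSA", "ECDH", "DSA", "DH")
    DEPRECATED = ("3DES", "DES", "RC4", "SHA-1", "MD5", "SSLV3", "TLSV1.0", "TLSV1.1")

    def rank(asset):
        algo = asset["algorithm"].upper()
        if any(bad in algo for bad in DEPRECATED):
            return "critical: deprecated primitive or protocol"
        if asset.get("key_bits") and asset["key_bits"] < 2048:
            return "critical: weak key length"
        if any(q in algo for q in QUANTUM_VULNERABLE):
            # Severity depends on context: exposure and data shelf life.
            if asset.get("context", {}).get("external_facing"):
                return "high: quantum-vulnerable, external-facing"
            return "medium: quantum-vulnerable, internal"
        return "ok"

    print(rank({"algorithm": "TLSv1.2/ECDHE-RSA-AES256-GCM-SHA384",
                "key_bits": 2048, "context": {"external_facing": True}}))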

8. Remediation Planning and Integration into Security Architecture

With a completed and analyzed cryptographic inventory, the next step is to operationalize it within the broader security program. Develop a remediation and migration roadmap based on the prioritized inventory. This means creating specific projects or tickets for each needed action: e.g., “Replace all RSA-2048 certificates on external sites with quantum-safe alternatives by Q4”, “Upgrade Library X from the version using SHA-1 to the version using SHA-256 in these 5 applications”, or “Coordinate with the OT vendor to patch firmware enabling TLS 1.3”. Where possible, set deadlines aligned with regulatory timelines (many organizations are using 2025–2030 as target windows for post-quantum upgrades). Then integrate inventory maintenance into existing processes:

  • Change Management: Update software development and deployment checklists to include cryptography checks. For instance, if an application team requests deployment of a new app, require that they register the cryptography it uses (or run the code scanner as part of CI – see the sketch after this list) so the CBOM is updated. The inventory should become a living artifact, updated with each change.
  • Security Monitoring: Feed relevant inventory data into security monitoring tools. If you have a SIEM or SOC, consider ingesting the inventory of certificates and algorithms so that, for example, the SOC can be alerted if an unauthorized algorithm is observed in network traffic (since you know exactly what is expected). Some integrations exist – e.g., InfoSec Global integrates with SIEM/SOAR platforms such as Azure Sentinel to automate crypto monitoring. You might also set up dashboards that track metrics like “number of weak crypto instances remaining” as a measure of risk over time.
  • Compliance and Governance: Map the cryptographic inventory to compliance controls. For PCI DSS 4.0, for example, there’s a requirement (12.3.3) to maintain an inventory of cryptographic mechanisms – your CBOM fulfills that, so ensure policy documents reference it. For governance, establish an owner (perhaps the crypto center of excellence or risk team) responsible for keeping the inventory updated and reviewing it periodically (at least annually, or with every major tech update). This should be written into policies: e.g., “All systems must have their cryptography components cataloged in the organizational CBOM; any deviation requires approval.”
  • Continuous Auditing in OT: For OT environments, integration might mean scheduling read-only periodic scans (like leaving passive probes in place permanently for continuous audit) or working with vendors for regular attestation of device crypto. Since production impact is a concern, use the data from passive monitoring to ensure no new weak crypto creeps in over time, and plan for technology refresh cycles to incorporate crypto updates (the inventory will help spot when an OT device will become non-compliant so it can be upgraded in a maintenance outage).
Finally, consider crypto-agility and incident-response integration. The inventory should feed your incident response playbooks – if a new vulnerability hits (say, a weakness in RSA or in a particular TLS library), the CBOM lets you instantly identify which systems are affected. Likewise, as new quantum-resistant algorithms are approved, your inventory process should track them as “approved” and flag old algorithms as “to be replaced”. In summary, by embedding the cryptographic inventory into the fabric of IT/OT governance, you ensure it stays up to date and continues to drive proactive risk reduction. The inventory is not a one-time project but an ongoing program component, much like vulnerability management.
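As one way to wire the change-management item above into CI, a pipeline step can fail the build when the scanner’s report contains disallowed algorithms. The sketch below assumes the report is a JSON list of records in the normalized shape from step 6; the policy list and file format are illustrative:

    #!/usr/bin/env python3
    """Illustrative CI gate: exit non-zero (failing the pipeline) if the
    crypto scan report contains algorithms the policy disallows."""
    import json
    import sys

    DISALLOWED = ("SHA-1", "MD5", "3DES", "RC4", "SSLV3")

    def main(report_path):
        with open(report_path) as f:
            findings = json.load(f)
        violations = [a for a in findings
                      if any(bad in a["algorithm"].upper() for bad in DISALLOWED)]
        for v in violations:
            print(f"DISALLOWED CRYPTO: {v['algorithm']} at {v['location']}")
        return 1 if violations else 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))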

By following this project plan, a greenfield organization will have methodically prepared, discovered, and institutionalized a cryptographic inventory. This comprehensive CBOM becomes a cornerstone of the security architecture – providing visibility into cryptographic use across the enterprise and enabling measured progress toward cryptographic agility and compliance in both IT and OT realms. As noted by standards bodies, maintaining such an inventory is now considered foundational for crypto risk management and PQC preparedness. Through careful preparation, the use of layered discovery methods, cautious testing (especially in OT), and strong integration, the organization can achieve a reliable cryptographic inventory without disrupting business operations, thereby strengthening its overall security posture ahead of evolving threats.

Conclusion

Performing a comprehensive cryptographic inventory is a formidable task – arguably one of the most complex undertakings in preparing for the post-quantum era. It requires time, technology, and teamwork. However, it is also an indispensable foundation for any serious cryptographic modernization or quantum readiness program. By thoroughly identifying where and how cryptography is used in your organization, you gain the knowledge needed to manage those cryptographic assets wisely. The inventory (and resulting CBOM) becomes your map for navigating the transition to quantum-resistant cryptography and for shoring up any weaknesses in current crypto implementations.

The key lessons we highlighted include:

  • Don’t skip or skimp on the inventory – regulators and best practices demand it, and it underpins all other efforts. But also recognize it’s hard and plan accordingly (dedicate proper resources and time).
  • Avoid the pitfalls of purely manual inventories. Interviews and spreadsheets alone will be incomplete and potentially misleading. Use automated discovery tools and multiple data sources to get a true picture, and treat human input as a supplement, not the sole source.
  • Use a multi-pronged tool approach. There is no single silver bullet tool for crypto inventory – combine static analysis, dynamic/runtime monitoring, network observation, config scanning, etc., to cover all bases. Each method finds pieces that others miss.
  • Consolidate into a CBOM and keep it up to date. The inventory is only useful if organized and maintained as a living artifact. Embrace emerging standards for CBOM to integrate with SBOM and supply chain security processes.
  • Anticipate operational challenges. Work collaboratively with system owners, use non-intrusive techniques where possible, and possibly create test environments for scanning. The goal is to gather info without breaking things – a delicate balance in critical environments.

With a comprehensive cryptographic inventory in hand, an organization can move to the next phase: developing a risk-informed cryptographic transition strategy. This means using the inventory to prioritize which cryptographic systems to upgrade or replace, formulating a plan (e.g. what to tackle in the next 1-2 years vs. later), and integrating quantum-safe solutions in a way that aligns with business risk appetite. We will delve into that strategic planning in a subsequent article. (For instance, as a preview, some organizations choose a risk-driven approach – if a full inventory seems too slow, they might immediately remediate the most mission-critical systems with assumed vulnerabilities, while concurrently finishing the inventory for the rest. This hybrid strategy can yield quick wins but still aspires to complete the inventory in parallel. We explored such approaches in “A Risk-Driven Strategy for Quantum Readiness,” which addresses how to prioritize actions when ideal inventory data is lacking.)

Ultimately, comprehensive cryptographic inventory is the bedrock of crypto-agility. It equips you with knowledge to act decisively – whether it’s retiring an unsafe cipher, proving compliance, or migrating to post-quantum algorithms. By conquering the inventory challenge, you position your organization to face the coming quantum cryptography transition (and other cryptographic threats) with eyes wide open and tools in hand. It’s a demanding journey, but an absolutely worthwhile one for any security-conscious organization in the modern era.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.