
Rethinking CBOM

Why the quest to inventory “all cryptography” is doomed – and what to build instead

Introduction

In December 2018, a single expired certificate inside an Ericsson software module – an SGSN-MME node, to be precise – brought down mobile networks across eleven countries simultaneously. In the UK alone, 32 million O2 subscribers lost 4G and SMS service for the better part of a day. Japan’s SoftBank experienced a nationwide outage. O2 would later seek £100 million in compensation from Ericsson. The root cause was not a cyberattack, not a zero-day vulnerability, not a sophisticated adversary. It was an overlooked cryptographic object – a certificate – that nobody realized was about to expire, embedded in infrastructure that nobody had mapped with sufficient care.

If that were an isolated incident, it would be a cautionary anecdote. It is not.

In 2017, an expired SSL certificate on a network traffic inspection device at Equifax rendered that device unable to decrypt and inspect outbound traffic. Attackers exfiltrated the personal data of 147.9 million Americans – undetected, for 76 days. A subsequent investigation revealed that at least 324 SSL certificates had expired across the organization, including 79 for business-critical domains. The total cost exceeded $1.7 billion. The Congressional finding was damning: had Equifax implemented a certificate management process with defined roles and responsibilities, the SSL certificate would have been active when the intrusion began.

In February 2020, a forgotten authentication certificate renewal took Microsoft Teams offline for 20 million daily active users. The irony was not lost on anyone: Microsoft itself develops SCOM (System Center Operations Manager), a monitoring tool that can alert on expiring certificates.

In April 2021, an expired wildcard TLS certificate cascaded across hundreds of Epic Games backend services. Root cause was identified in 12 minutes, the certificate renewed shortly thereafter – but full recovery took more than five hours because of cascading failures across dependent systems that no one had fully mapped.

I could go on. According to Keyfactor’s 2024 PKI and Digital Trust Report, organizations experience an average of three certificate-caused outages every 24 months, each taking roughly 2.6 hours to identify and 2.7 hours to remediate. At industry-average costs of $9,000 per minute, that translates to approximately $2.86 million per incident. And perhaps the most telling statistic of all: 53% of organizations still do not know exactly how many keys and certificates they have.

Every one of these failures shares the same root cause. Not negligence, exactly, though one can argue about that. The deeper problem is architectural invisibility: cryptography is the connective tissue of modern systems, but it is treated as a feature rather than an infrastructure layer, and almost nobody has an accurate, actionable map of where it lives, what it protects, and what breaks when it changes.

That is the origin story of the Cryptographic Bill of Materials, or CBOM. Not “we need another compliance artifact,” but “we need X-ray vision.” The promise is seductively simple: if we could inventory our cryptography – algorithms, protocols, keys, certificates, libraries, and how they connect – we could finally answer the question that becomes existential the moment post-quantum risk enters the room: where is our cryptography, and what is it actually protecting?

The problem is that CBOM, as commonly understood, has a beautiful, impossible first draft baked into the idea. Inventory all cryptography. Everything. Everywhere. Fully enumerated. Continuously updated. It sounds like responsible governance – the same managerial reflex that made SBOM inevitable. And in the real world, it is the fastest way to stall a program before it produces a single decision you can execute.

In this article, I want to argue for a different framing – one that I believe is not only more practical, but more honest about how cryptography actually exists in enterprise environments. I will try to explain why the completeness model fails, what policy guidance actually says (as opposed to what people assume it says), and how to build what I call a Minimum Viable CBOM: a layered, architecture-first approach that drives decisions rather than decorating slide decks.

Let me explain.

CBOM in 60 seconds

The simplest way to explain CBOM is still the best. If SBOM is the ingredients list for software, CBOM is the ingredients list for the security assumptions that software depends on. Where SBOM tracks components and dependencies, CBOM tracks cryptographic assets – algorithms, protocols, certificates, keys, and related material – and the relationships that turn “implemented somewhere” into “actually used here.”

This is not happening in a vacuum. OWASP CycloneDX has been a major force in formalizing CBOM as part of a broader “xBOM” ecosystem. CBOM was formally introduced in CycloneDX v1.6, released in April 2024, developed originally by IBM Research and integrated upstream. CycloneDX itself was ratified as ECMA-424 at the 127th Ecma General Assembly in Geneva in June 2024 – which matters if you want CBOM to be more than a boutique format. CycloneDX v1.7, released in October 2025, expanded CBOM support further with a Cryptography Registry that provides standardized naming patterns for algorithm families, addressing the inconsistencies in how different tools and organizations classify cryptographic primitives.

The CycloneDX CBOM guidance is blunt about why this needs to be machine-readable: cryptography is buried deep inside components, and the only scalable way to manage it is to represent it in a structured object model that tools can reason over. In practice, that is the dividing line between “a slide deck about crypto” and “an engineering artifact that can drive migration, audits, procurement requirements, and policy enforcement.”
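To make “structured object model” concrete, here is a minimal sketch of what a single CBOM entry can look like – a Python dictionary serialized to CycloneDX-style JSON. The field names follow my reading of the v1.6 cryptoProperties model; treat it as illustrative, and consult the CycloneDX specification for the normative schema.

import json

# A minimal, illustrative CBOM fragment in the spirit of CycloneDX v1.6.
# Field names follow the published cryptoProperties model as I read it;
# the CycloneDX specification is the normative source.
cbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {
            "type": "cryptographic-asset",
            "name": "AES-128-GCM",
            "bom-ref": "crypto/algorithm/aes-128-gcm",
            "cryptoProperties": {
                "assetType": "algorithm",
                "algorithmProperties": {
                    "primitive": "ae",  # authenticated encryption
                    "parameterSetIdentifier": "128",
                    "executionEnvironment": "software-plain-ram",
                    "cryptoFunctions": ["encrypt", "decrypt"],
                },
            },
        }
    ],
}

print(json.dumps(cbom, indent=2))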

And here is the trap door: the moment you say “inventory,” people hear “complete.” CBOM dies when it becomes a purity test.

Why “inventory all cryptography” sounds right – and fails anyway

Enterprises love countable things. Servers. Applications. Endpoints. Licenses. Controls. Cryptography feels like it should be countable too: list the algorithms, enumerate the certificates, count the keys, declare victory.

But cryptography is not just a component you install. In modern architectures, cryptography is frequently a runtime outcome – shaped by configuration, negotiation, platform defaults, intermediary termination points, and vendor constraints. That is not rhetoric; it is explicitly reflected in the CBOM standardization thinking.

The CycloneDX CBOM guide makes a point that quietly detonates any “scan the code and we’re done” strategy. Developers often do not interact directly with cryptographic primitives like RSA or ECDH. They consume cryptography via protocols – TLS, IPsec – and via certificates, keys, and tokens. And as crypto-agility becomes more common, algorithm choice becomes something negotiated per session or configured at deployment rather than hardcoded into source.

That single observation has enormous operational consequences.

If the truth of your crypto posture is negotiated at runtime, then a “complete CBOM” is not just a software artifact. It is a hybrid of code inventory, infrastructure inventory, policy inventory, and vendor transparency exercise. It is also time-dependent: the truth changes when a TLS policy changes, when a load balancer gets upgraded, when a service mesh rolls out stricter defaults, or when a SaaS provider updates their cryptographic stack behind your back.
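A few lines of code make the point. This sketch (standard library only; the hostname is a placeholder) asks a live connection what it actually negotiated – run it from two different environments against the same service and you may get two different answers, with identical application code.

import socket
import ssl

# What cryptography is this connection *actually* using? The answer is
# negotiated at runtime and can differ per environment, per peer, per policy.
host = "example.com"  # placeholder target
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())  # e.g. 'TLSv1.3'
        print("cipher:  ", tls.cipher())   # (name, protocol, secret bits)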

Then CycloneDX introduces a second complication that sounds semantic until you try to act on it: implementation is not usage. A cryptographic library can implement an algorithm that nothing in your application ever calls. A platform can ship cryptographic capabilities that your deployment never enables. So the CBOM data model differentiates between a component that provides cryptographic capabilities and one that uses them. The distinction evolved from IBM’s original CBOM specification – which used “implements” and “uses” – to the “provides” and “dependsOn/uses” relationships in CycloneDX v1.6. The standard explicitly warns that a component implementing another component does not imply that the implementation is in use.

In other words: a CBOM that cannot tell you what is actually being exercised in production will actively mislead you.
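In CycloneDX terms, that distinction lives in the dependency graph rather than in the component list. A sketch of the shape – again illustrative, not normative:

# Illustrative CycloneDX-style dependency fragment: "provides" says a library
# ships an algorithm; a separate dependency edge is what records actual use.
dependencies = [
    {
        "ref": "pkg:generic/openssl@3.0",            # the crypto library
        "provides": ["crypto/algorithm/rsa-2048"],   # implemented capability
    },
    {
        "ref": "pkg:generic/my-payment-service@1.4", # the application
        "dependsOn": ["pkg:generic/openssl@3.0"],    # uses the library...
        # ...but only evidence (config, handshakes) shows *which* provided
        # algorithms are exercised in this deployment.
    },
]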

And we have not even touched the black-box layer: appliances, telecom network elements, embedded devices, managed security platforms, and the “crypto somewhere inside the vendor blob” reality that permeates every serious enterprise. This is where completeness fantasies go to die, because visibility is not just technically hard – it is often contractually constrained. A typical Windows environment alone may contain between 80,000 and 500,000 certificates. Manual tracking at that scale is infeasible. Automated tooling, as we shall see, remains immature.

So when CBOM is framed as “cover all cryptography,” programs tend to collapse into two familiar failure modes:

  • The museum CBOM: Gorgeous. Detailed. Static. Already wrong by the time it is published.
  • The treadmill CBOM: Endless inventory work that never graduates into prioritization, migration planning, or policy enforcement.

Both feel like progress until the moment you need the CBOM to drive a real decision – like which systems must migrate first for post-quantum cryptography, or where you can safely disable a legacy cipher suite without breaking revenue.

Policy already knows this

If you want a hard reset on completeness fantasies, read what governments are actually requiring for post-quantum migration. The rhetoric sounds sweeping. The operational guidance is more pragmatic than many enterprises are willing to be with themselves.

OMB Memo M-23-02, issued November 18, 2022, is explicit: the post-quantum transition begins with a prioritized inventory of cryptographic systems, focusing on High Value Assets (HVAs) and high-impact systems. It defines “cryptographic system” in functional terms – key creation and exchange, encrypted connections, digital signatures – and sets a cadence: agencies must submit a prioritized inventory by May 4, 2023, and annually thereafter until 2035, or as directed by superseding guidance. The requirement encompasses systems that are agency HVAs or any system an agency determines is likely to be particularly vulnerable to attacks by a cryptanalytically relevant quantum computer (CRQC).

Read that carefully. Prioritized. Scoped. Linked to impact. Annual cadence. Not “enumerate everything by Q2.”

Then the 2024 White House Report on Post-Quantum Cryptography adds the nuance that practitioners feel in their bones. It acknowledges that automated cryptographic inventory solutions are rapidly emerging and can expedite inventory work – but it warns that automated inventories may not identify all instances of public-key cryptography because automated tools may not have visibility over an entire agency network or be compatible with all digital systems. It states that each agency is required to perform an annual manual inventory – completing this manual inventory entails researching each piece of hardware and software to discover the type of cryptography used. It calls comprehensive inventory maintenance an iterative and ongoing process requiring sustained investment as systems evolve through patching, updating, and lifecycle refresh.

The report also frames the urgency in language that is unusually concrete for a policy document: the threat of “harvest-now-decrypt-later” attacks means the migration to PQC must start before a CRQC is known to be operational.

CISA’s operational guidance reinforces this realism. A joint factsheet from CISA, NSA, and NIST in August 2023 includes a dedicated section titled “Prepare a Cryptographic Inventory,” instructing organizations to use cryptographic discovery tools to identify quantum-vulnerable algorithms in network protocols, on end user systems and servers, and in cryptographic code or dependencies in the CI/CD pipeline. But it also warns that discovery tools may not be able to identify embedded cryptography used internally within products. CISA’s own Automated Cryptographic Discovery and Inventory (ACDI) strategy from September 2024 acknowledges that only three of the nine M-23-02 data items can be collected via automation – the rest require manual collection.

The PQCC migration roadmap, published in May 2025 by the Post-Quantum Cryptography Coalition – a group of 125+ contributors led by MITRE and co-founded by IBM Quantum, Microsoft, PQShield, SandboxAQ, and the University of Waterloo – pushes the same operational realism from a “how to execute” perspective. It explicitly tells organizations to prioritize assets in their inventory based on sensitivity and lifespan. It also does something that most enterprise programs avoid because it feels uncomfortable: it tells you to document what you do not know. The FS-ISAC PQC Working Group’s Infrastructure Inventory Technical Paper, referenced by the PQCC ecosystem, is even more direct: it describes the need to understand potential blind spots – offline keys, keys in file structures inaccessible to network scanners, keys of unknown format.

The cost dimension is worth noting. The White House Office of the National Cyber Director projected $7.1 billion (in 2024 dollars) for government-wide PQC migration between 2025 and 2035. Cryptographic inventory is the foundation on which that investment is planned. If the inventory is wrong, the spending is wrong.

Read all of this as an enterprise leader and you should feel a strange kind of relief. Because it is the most authoritative version of a thesis that many CBOM programs desperately need permission to adopt: inventory is not a one-off deliverable, and automation will not find everything. The goal is to build an improving capability, not to reach perfection and stop.

The EU is moving in the same direction

This pragmatic realism is not limited to the United States. The EU’s Coordinated Implementation Roadmap for the transition to post-quantum cryptography, released June 23, 2025, sets three milestones: all Member States begin transitioning by end of 2026 (including establishing cryptographic inventories), high-risk systems secured by end of 2030, and full transition by end of 2035.

Critically, the roadmap explicitly states that organizations should maintain a cryptographic inventory using standardized formats, such as the Cryptographic Bill of Materials (CBOM) – a direct endorsement of the concept, and one of the first policy-level references to CBOM by name.

The EU framework does not exist in isolation. NIS2, effective since October 2024, requires cryptographic policies under its security measures and mandates that those measures take into account the state of the art – effectively requiring crypto-agility. DORA, in force since January 2025, requires robust cryptographic controls for financial entities. PCI DSS v4.0 explicitly requires organizations to maintain an up-to-date, documented inventory of all cryptographic ciphers and protocols in use and develop migration plans for cryptographic obsolescence. And Europol, in January 2026, published a financial services PQC migration framework using a Quantum Risk Score and Migration Time Score for prioritization – another signal that the policy world has already accepted that triage, not totality, is the operating model.

Meanwhile, NSA’s CNSA 2.0 suite, announced in September 2022, establishes the quantum-resistant algorithm suite for National Security Systems – with aggressive transition timelines: software and firmware signing must support and prefer CNSA 2.0 algorithms by 2025, and use them exclusively by 2030. All NSS transition must be complete by 2035. While CNSA 2.0 does not contain an explicit “cryptographic inventory” section, compliance with it is impossible without knowing what cryptography is deployed – a fact that makes the inventory question unavoidable for any organization operating in the national security supply chain.

The mental model that unsticks everything

Here is the model that tends to flip CBOM from “inventory theater” into something operational.

Trying to build a complete CBOM is like trying to inventory every pipe, valve, and gasket in a city’s water system. You can do it – on paper – if you are willing to spend years surveying basements, alleys, undocumented renovations, and privately owned plumbing that connects to public infrastructure. Even then, you will miss the emergency bypasses added during outages, the “temporary” valves installed during construction, and the unlabeled hacks that kept someone’s critical service alive at 3 a.m.

A useful CBOM is a hydraulic map. It tells you where the mains run, where the pressure points live, which neighborhoods depend on which reservoirs, and which valves you must not touch unless you are ready to shut down a hospital.

In crypto terms, the first version of your CBOM should be a map of trust (a minimal data-model sketch follows the list):

  • Trust anchors: root CAs, internal PKI, cloud KMS/HSM platforms, signing infrastructures, identity providers, firmware roots of trust.
  • Termination points: where TLS or IPsec ends, where traffic is decrypted for inspection, where it is re-encrypted, where “secure tunnels” are actually being terminated by an intermediary.
  • Interfaces and flows: what crosses which boundary, what protects it, and which team controls the dial.
  • Ownership and change control: who can rotate keys, update libraries, change cipher policy, approve outages, negotiate vendor change windows.
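Here is one minimal way to hold that map as data rather than diagrams. The class and field names are my own, not from any standard; the point is that the first schema is about anchors, termination points, and owners – not algorithms.

from dataclasses import dataclass, field

# A minimal sketch of the trust-map layer. Names are illustrative inventions.
@dataclass
class TrustAnchor:
    name: str       # e.g. "internal-issuing-ca-2"
    kind: str       # "root-ca", "kms", "hsm", "idp", "firmware-rot"
    owner: str      # team accountable for rotation and policy

@dataclass
class TerminationPoint:
    location: str      # "edge-lb", "service-mesh-sidecar", "gateway-appliance"
    protocol: str      # "TLS", "IPsec", "mTLS"
    reencrypts: bool   # is traffic re-encrypted after inspection?
    policy_owner: str  # who controls the cipher/protocol dial

@dataclass
class Flow:
    boundary: str      # which trust boundary this crosses
    protected_by: str  # reference to a TerminationPoint or mechanism
    data_types: list[str] = field(default_factory=list)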

This is architecture, not code scanning. Code scanning is evidence. Architecture is scope and prioritization.

My Open RAN CBOM case study demonstrates exactly why architecture-first CBOM scales when “inventory everything” does not. Rather than treating the RAN as a monolithic blob of cryptography, we decomposed it by components – O-RU, O-DU, O-CU, RIC, SMO, O-Cloud – and by planes and interfaces, then enumerated cryptographic mechanisms in context. The CBOM becomes navigable because the architecture is navigable. ATIS published a complementary report in 2025, “Cryptographic Bill of Materials (CBOM) for Telecom: Enabling Quantum-Safe Transition and Crypto-Agility in 5G Networks,” further validating this architecture-centric approach for the telecom sector.

Enterprise CBOM needs the same move. Not because telecom is special. Because every enterprise is functionally an ecosystem of components and interfaces.

How this actually works in the field

A pragmatic CBOM is not “built” like a document. It is triangulated – like an investigation. You take three imperfect pictures of reality – people, paper, and packets – and overlay them until the intersections become actionable truth.

People: interview for boundaries, not algorithms

If you ask an application team what algorithms they use, you will usually get one of three things: a guess, a library name, or a confident answer that is only true in one deployment environment. That is not incompetence. It is the reality of layered platforms and negotiated protocols.

If you ask boundary questions, you get architecture. You uncover where cryptographic decisions are actually being made:

  • Where does TLS terminate – and why there? Is it the ingress controller, the load balancer, the service mesh, the gateway appliance?
  • Which integrations are mutual TLS, and who issues and rotates the certificates?
  • Where are private keys stored – and can they be exported? Are they in an HSM, a cloud KMS, a file system, an “encrypted” secret store, or a vendor-managed vault you cannot inspect?
  • What breaks if we rotate the issuing CA? Which systems pin certificates or embed public keys?

These questions sound operational. They are actually CBOM accelerators. They identify cryptographic choke points – the exact places where a post-quantum migration, or even a routine crypto policy change, will either succeed smoothly or detonate across dependencies.

This aligns with a broader truth about crypto-agility: swapping algorithms is rarely a drop-in operation. It is constrained by interoperability, performance, ecosystem dependencies, and bugs. CBOM interviews that ignore interoperability produce inventories that look complete but are not migration-ready.

Paper: your organization already has a CBOM skeleton

It just does not call it that. Most enterprises already maintain partial maps that can seed a CBOM index – they are just scattered across teams and tools:

  • CMDB and infrastructure inventories tell you what exists (sometimes).
  • AMDB and application portfolio data tell you what the business thinks exists (often differently).
  • BIAs and critical business service maps tell you what actually matters when things break.
  • Network diagrams and segmentation models tell you where trust boundaries were intended to be.
  • PKI documentation and certificate inventories tell you what your identity spine looks like.
  • Vendor solution designs tell you what is inside the black box – or at least what the vendor is willing to describe.

None of these artifacts are “CBOM.” Together, they answer the foundational CBOM questions: what exists, who owns it, and what does it touch? The PQCC roadmap explicitly encourages leveraging information already available, then deciding whether additional discovery is necessary.

The pragmatic move is to treat these sources as the first draft of your CBOM index. Then use discovery tooling to validate, enrich, and expose contradictions – because contradictions between what the architecture diagram says and what the network is actually negotiating are where the most dangerous gaps live.

Packets (and binaries, and configs): instrument surgically

NIST’s NCCoE Migration to Post-Quantum Cryptography project – a collaboration of over 47 organizations including AWS, Cisco, Google, IBM, JPMorgan Chase, Microsoft, NSA, and CISA – frames cryptographic discovery as a structured effort with a functional test plan, use-case scenarios, and a reference architecture for integrating discovery tools. NIST SP 1800-38B, the Quantum Readiness: Cryptographic Discovery volume, explicitly identifies CBOMs as having the potential to enable organizations to manage and report usage of cryptography in a standardized way.

But the White House report is the counterweight that keeps you honest: automated inventories will miss instances, which is why manual inventory is still required and the overall process must be iterative. The NCCoE testing itself demonstrated that no single product finds all instances of vulnerable cryptography – a multi-tool approach is required.

So the pragmatic posture looks like this:

  • Use automated discovery where you have coverage and control.
  • Treat “no findings” as unknown, not safe.
  • Concentrate effort where findings alter the migration plan: HVAs, internet-facing boundaries, partner links, identity roots, and long-lived sensitive data flows.
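As a small illustration of the “unknown, not safe” discipline: a sketch of a probe that records a definite observation when a handshake succeeds and an explicit unknown when it does not, rather than silently recording nothing.

import socket
import ssl

def probe_tls(host: str, port: int = 443) -> dict:
    """Record what an endpoint negotiates; failure is 'unknown', never 'safe'."""
    try:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # inventory probe, not a trust decision
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                name, proto, bits = tls.cipher()
                return {"host": host, "status": "observed",
                        "protocol": tls.version(), "cipher": name}
    except OSError as exc:
        # No finding is not evidence of absence: mark it as a known unknown.
        return {"host": host, "status": "unknown", "reason": str(exc)}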

The Minimum Viable CBOM

This is the make-or-break moment for most programs. If you start by demanding full cryptographic detail everywhere, you stall. If you start with a minimal model that supports prioritization and execution, you create momentum – and then you can deepen iteratively.

SBOM learned this lesson the hard way. The NTIA “Minimum Elements for a Software Bill of Materials” document, published in July 2021 in response to Executive Order 14028, established a baseline of seven required data fields, machine-readability requirements, and operational practices – explicitly acknowledging that transparency initiatives only scale when there is an adoptable minimum. CISA updated this with the “2025 Minimum Elements for SBOM” in August 2025. CBOM needs the same philosophy: a Minimum Viable CBOM explicitly designed to drive decisions, not documentation.

CycloneDX’s CBOM design supports this approach. It is an abstraction that lets you represent cryptographic assets, their properties, and, critically, their dependencies, including the difference between what is provided and what is used. That is exactly what a Minimum Viable CBOM should capture: enough structure to reason about change impact, not an encyclopedic catalogue of everything.

A practical Minimum Viable CBOM in large enterprises tends to have four layers.

Layer A – The CBOM Index

System-level, ownership-level truth.

This is where you stop thinking like a cryptographer and start thinking like an architect and a program owner.

For each critical system or service, capture what lets you plan work and assign accountability: business owner, technical owner/operator, environment scope (on-premises, cloud, SaaS, hybrid), criticality rating from BIA, data types and protection lifetime (especially where “record-now-decrypt-later” is a meaningful threat), and trust dependencies – PKI, identity providers, KMS/HSM, gateways.

This is the layer that turns CBOM from “crypto trivia” into a portfolio you can govern. It aligns with how OMB frames inventories: prioritized, scoped, linked to impact and time-to-live considerations.
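As a sketch, one row of the index might look like this – field names illustrative, the discipline being that every entry is ownable, prioritizable, and plannable:

from dataclasses import dataclass

# Layer A sketch: one row of the CBOM Index.
@dataclass
class CbomIndexEntry:
    system: str                     # e.g. "payments-core"
    business_owner: str
    technical_owner: str
    environment: str                # "on-prem", "cloud", "saas", "hybrid"
    criticality: str                # from the BIA, e.g. "tier-1"
    protection_lifetime_years: int  # how long the data must stay confidential
    trust_dependencies: list[str]   # ["internal-pki", "cloud-kms", "idp"]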

Layer B – The Crypto Surface Map

Interfaces and trust boundaries.

This is the cartography layer – the part that makes the CBOM navigable.

For each system, map the interfaces where cryptography is doing real work: API ingress, partner links, service-to-service calls, admin access paths, data replication flows. Capture where encryption terminates, where it is inspected, where it is re-encrypted, and who controls those control points.

This is also where you separate “the application uses TLS” from the more decisive truth: “TLS terminates at the edge, is decrypted for inspection by a network appliance, then re-encrypted, and the cipher policy is owned by the platform team, not the application team.”

It matches the PQCC roadmap’s explicit instruction to document architectural designs, protocols, and interfaces for important assets.
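Captured as data (values illustrative), that decisive truth becomes one small record instead of tribal knowledge:

# One Layer B record for the example above.
edge_termination = {
    "interface": "customer-api-ingress",
    "terminates_at": "edge-load-balancer",
    "inspected_by": "network-appliance",     # decrypted for inspection
    "reencrypted": True,
    "cipher_policy_owner": "platform-team",  # not the application team
}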

Layer C – Evidence

Cryptographic detail where you intend to act.

Only now do you go deep: public-key mechanisms in use (key establishment and digital signatures are the PQC pressure points), certificate properties (algorithm, key size, expiry, issuers), crypto libraries and versions, key management lifecycle practices (HSM/KMS, rotation, escrow, backup).

And you do it with the “provides versus uses” discipline in mind, because implementation without usage context produces both false alarms and false confidence.
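Much of this evidence is mechanically extractable. A sketch using the pyca/cryptography package – one common choice, not the only one – that pulls Layer C properties out of a single PEM certificate:

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def cert_evidence(pem_bytes: bytes) -> dict:
    """Extract Layer C properties from one PEM-encoded certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    pub = cert.public_key()
    if isinstance(pub, rsa.RSAPublicKey):
        algo, size = "RSA", pub.key_size
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        algo, size = f"EC/{pub.curve.name}", pub.curve.key_size
    else:
        algo, size = type(pub).__name__, None  # e.g. Ed25519
    return {
        "subject": cert.subject.rfc4514_string(),
        "issuer": cert.issuer.rfc4514_string(),
        "public_key_algorithm": algo,
        "key_size_bits": size,
        "signature_hash": getattr(cert.signature_hash_algorithm, "name", None),
        "not_after": cert.not_valid_after_utc.isoformat(),  # cryptography >= 42
    }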

Layer D – Known Unknowns

The honesty that prevents false confidence.

This is not a nice-to-have. It is your credibility layer.

Document blind spots explicitly: vendor opacity, unmanaged endpoints, offline keys, inaccessible key stores, unknown formats, systems where you simply do not have the contractual right or technical capability to inspect cryptographic internals.

A CBOM that admits uncertainty can be improved. A CBOM that pretends certainty becomes a liability – because leaders will treat it as complete during planning, procurement, and risk acceptance. This is also the layer that keeps leadership honest. Without it, the CBOM becomes a promise the program cannot keep.
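Even the blind spots deserve structure. A minimal sketch – field names are my own – of a known unknown as a first-class record rather than a footnote:

from dataclasses import dataclass

# Layer D sketch: a documented gap with an owner and a deadline.
@dataclass
class KnownUnknown:
    scope: str      # e.g. "vendor appliance X", "offline keys at site B"
    reason: str     # "contractual opacity", "no scanner reach", "unknown format"
    owner: str      # who is responsible for shrinking this gap
    review_by: str  # ISO date when the gap must be revisited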

What to map first

Once you accept that “all cryptography” is a mirage, prioritization becomes the discipline that makes CBOM practical.

The PQCC roadmap’s prioritization logic is elegantly simple: prioritize assets based on sensitivity and lifespan. That is the post-quantum translation of a broader resilience truth – focus first on what would hurt most and what needs to remain protected the longest.

OMB’s framing points in the same direction for government systems, emphasizing prioritized inventories tied to HVAs and high-impact systems and driving an annual reporting cadence toward a 2035 horizon. NSA’s CNSA 2.0 timelines add concrete urgency: software and firmware signing must support CNSA 2.0 by 2025 and use it exclusively by 2030.

In enterprise practice, this consistently pulls the same categories toward the top of the queue:

  • Identity and trust anchors – PKI, signing, authentication – because they are multiplicative dependencies. If a root CA is compromised or must be rotated, the blast radius is not one system; it is every system that trusts that root.
  • Externally exposed encrypted interfaces – customer-facing services, partner links – because the adversary gets to choose the time and place of interaction, and because these are the interfaces most likely to be subject to regulatory scrutiny.
  • Long-retention sensitive data stores and flows – because record-now-decrypt-later is fundamentally about time horizons, not headlines. An academic paper published in 2025 in MDPI’s Computers journal estimated that large enterprises may need 12 to 15 years or more for full PQC migration, with cryptographic discovery alone taking two to three years across global infrastructure. If data needs to remain confidential for 20 years and migration takes 12, you are already late – a point the inequality after this list makes precise.
  • Legacy islands – where change is slow, vendor-dependent, and politically fraught. CISA’s “Post-Quantum Considerations for Operational Technology” document from October 2024 warns that OT systems may be among the last remaining platforms to achieve post-quantum cryptographic standards due to long software patching cycles, hardware replacement times, and strict operational procedures.
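That timing argument has a standard formalization, usually attributed to Michele Mosca. Let x be the number of years the data must remain protected, y the number of years the migration will take, and z the number of years until a cryptanalytically relevant quantum computer exists. Then you have a problem whenever

x + y > z

because data harvested today will still need protection after the machine arrives. With the figures above – x = 20, y = 12 – any z below 32 years means the exposure is already locked in.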

This is where CBOM stops being “inventory” and becomes a roadmap generator.

Keeping CBOM alive

A CBOM you cannot maintain becomes a museum piece. In environments shaped by cloud migrations, M&A, and continuous delivery, staleness is not a slow decay – it is a cliff. The White House PQC report does not sugarcoat this: inventories must be updated as hardware and software undergo patching, updating, and lifecycle refresh, and the process will require sustained investment.

CycloneDX’s CBOM guidance hints at what “maintainable” looks like: CBOM can be native to a broader BOM model, modular, and structured so different teams can own different parts with differing authorization requirements. In enterprise terms, that means you stop trying to centralize everything into one spreadsheet and instead build a federated model where authoritative sources feed the CBOM.

The “living CBOM” pattern tends to be three different workflows stitched together:

  • For software you build: CBOM generation and enrichment become part of release governance – and eventually CI/CD – mirroring SBOM practices.
  • For platforms you operate: certificate inventories, TLS policies, service mesh configurations, and KMS/HSM inventories become authoritative CBOM feeds.
  • For products you buy: vendor engagement becomes part of CBOM reality. Procurement and vendor risk management begin demanding crypto transparency – sometimes CBOM directly, sometimes vendor roadmaps, sometimes attestations – because you cannot migrate what you cannot influence.

The litmus test stays simple: can you use the CBOM to make a change safely? If you cannot answer “what breaks if we rotate X” or “where are we still negotiating Y,” then the CBOM is not yet an engineering asset. It is still a report.
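And that litmus test can be mechanized once the CBOM is a graph. A sketch, assuming the CBOM’s dependsOn edges have been loaded into a plain adjacency map (all names illustrative):

from collections import deque

def blast_radius(cbom_deps: dict[str, list[str]], target: str) -> set[str]:
    """Answer 'what breaks if we rotate/replace target?' by walking the
    dependency graph in reverse: everything that transitively depends on it."""
    # Invert the dependsOn edges: target -> dependents.
    dependents: dict[str, list[str]] = {}
    for ref, deps in cbom_deps.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(ref)
    impacted, queue = set(), deque([target])
    while queue:
        node = queue.popleft()
        for parent in dependents.get(node, []):
            if parent not in impacted:
                impacted.add(parent)
                queue.append(parent)
    return impacted

# Illustrative: rotating the issuing CA impacts everything that chains to it.
deps = {
    "payments-api": ["internal-issuing-ca"],
    "partner-gateway": ["payments-api"],
    "internal-issuing-ca": ["offline-root-ca"],
}
print(blast_radius(deps, "internal-issuing-ca"))
# -> {'payments-api', 'partner-gateway'}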

The tooling landscape: promise and reality

It is worth being honest about the current state of CBOM tooling, because the gap between marketing claims and operational reality is significant.

The most mature open-source effort is IBM’s CBOMkit, open-sourced through the Linux Foundation. It comprises several components: CBOMkit-Hyperion (a SonarQube plugin for source code scanning, currently supporting Java and Python), CBOMkit-Theia (container and directory scanning), CBOMkit-Coeus (a web-based CBOM viewer), CBOMkit-Themis (a compliance engine with built-in quantum-safe checks), and CBOMkit-Action (a GitHub Action for CI/CD integration). IBM has validated the approach internally, using it on live business applications handling sensitive data. But the documentation is candid about limitations: CBOMkit scans source code without building the repository first, which potentially reduces completeness and accuracy.

Commercial tools are proliferating. IBM’s Quantum Safe Suite (Explorer, Advisor, Remediator, Guardium Quantum Safe), Keyfactor’s AgileSec platform, SandboxAQ’s AQtive Guard, CryptoNext’s COMPASS, Quantum Xchange’s CipherInsights, and others are entering the market. I maintain a comparison of cryptographic inventory vendors for those who want to evaluate options.

But there is an important caveat that cuts across the entire tooling landscape, and it comes from the most direct practitioner critique I have seen. Keyfactor’s analysis of CBOM is blunt: a CBOM reflects built-in capabilities but does not capture how software is configured or used in a specific organization. It will not indicate whether an organization is using SHA-1 or SHA-256, even if both algorithms are supported by the library. Keyfactor explicitly concludes: a CBOM is necessary but not sufficient.

That assessment is consistent with what the NIST NCCoE testing demonstrated, what the White House report warns, and what anyone who has tried to build a CBOM in a real enterprise already knows. No single tool finds everything. A multi-tool, multi-method, iteratively improving approach – people, paper, and packets – is not a workaround. It is the architecture of a mature CBOM program.

Where this is heading

Here is the wager I would make if we zoom out beyond today’s tooling: CBOM evolves the way infrastructure documentation evolved.

First, we drew static diagrams. Then we generated diagrams from CMDBs. Then we built service graphs and runtime inventories and called it observability. CBOM will follow the same arc – not because standards bodies want prettier JSON, but because the economics of PQC migration, certificate lifecycle failures, and supply chain dependency management will force cryptography out of the shadows.

NIST’s “cryptographic discovery” framing in the NCCoE project is already shaped like an observability problem: repeatable discovery methods, tool evaluation, reference architectures, baseline capabilities. CycloneDX v1.7’s Cryptography Registry – with standardized algorithm family naming, cross-referencing to NIST and ISO standards, and machine-readable classification – is the kind of infrastructure you build when you expect CBOM to become a live, queryable system, not a static document.

The White House report’s language – iterative, ongoing, sustained investment – reads less like a compliance requirement and more like a description of an operational capability that never stops. Once you have a baseline CBOM, the next natural step is making it queryable and continuously enriched: handshake telemetry, certificate scans, code analysis, dependency graphs, key management events, and policy-as-code generating “crypto posture” views as routine as vulnerability dashboards.

This is not a promise that CBOM becomes easy. It is a prediction that CBOM becomes unavoidable – and therefore engineered into how we run systems, rather than bolted on as a periodic compliance ritual.

The real ambition

The CBOM ideal – catalog every cryptographic asset everywhere – sounds like responsible governance. In practice, it is often the fastest route to stall a program and produce an artifact that cannot drive a migration plan.

The pragmatic alternative is not settling for ignorance. It is changing the objective:

From completeness to decision usefulness. From crypto inventory to trust mapping. From a one-time deliverable to an iterative capability. From “find everything” to “find what matters first – and document what we don’t know.”

That is not lowering the bar. It is aligning CBOM with the way cryptography actually exists in modern systems: negotiated, layered, embedded, vendor-mediated, and always changing. It is consistent with how post-quantum inventories are framed in policy – prioritized, recurring – and in practical migration roadmaps: document interfaces and blind spots, prioritize by sensitivity and lifespan.

A pragmatic CBOM is not less ambitious. It is more real.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.