Quantum Commercialization

Quantum Computing Due-Diligence: A Field Guide to Evaluating Startups, Technologies, and Claims

Introduction

I still remember the first time I sat across from a quantum computing startup founder, listening to a pitch full of dazzling promises. As a tech investor with a quantum computing background, I was both excited and skeptical. Quantum technology is genuinely disruptive – its potential to revolutionize computing, cryptography, and materials is often described as the “second quantum revolution.” But that excitement also breeds confusion and hype. There’s no established playbook for quantum investors; funding decisions become trickier than ever because we’re in largely uncharted territory. How do we tell a world-changing breakthrough from well-packaged baloney?

Over the years, I’ve developed a personal toolkit – a field guide – for evaluating quantum startups and their claims. The goal is to neither be swept up by the mystique nor dismiss genuine progress. This balance is crucial: the quantum field is rife with hype, now more than ever, and companies sometimes exaggerate to secure funding. Yet real breakthroughs do happen – just on a longer timeline and with more caveats than typical tech ventures.

So how can we “develop a quantum BS detector” and stay grounded in reality? It starts with understanding the basic terms (often misused), then applying a structured diligence framework.

Demystifying Common Quantum Metrics (and Their Misuse)

One of the first things I do in a quantum pitch meeting is clarify the jargon. Founders will throw around terms like “coherence time of 100 microseconds,” “99.99% fidelity,” “logical qubits,” “QEC,” or “quantum advantage.” These metrics can sound impressive, but they’re often misunderstood or conveniently presented without context. Let’s decode a few of the key metrics and how they can be misused:

Coherence Time

This is essentially how long a qubit remains in a useful quantum state before it decoheres (loses its quantum information). Coherence time is often denoted by T₁ or T₂ (relaxation or dephasing times). In simple terms, it’s “how long a qubit can maintain one of the critical quantum properties like superposition and entanglement” before environmental noise disrupts it.

Longer coherence means you can run deeper circuits or more operations. However, context matters: a quoted coherence time might be for an idle qubit in perfect isolation – actual useful coherence during computation can be much lower. Different technologies have vastly different coherence scales (e.g. superconducting qubits: microseconds; trapped ions: milliseconds to minutes).

Misuse: A startup might boast “long coherence times” without noting caveats. For instance, if they measured a T₂ of 100 μs in a lab, that’s good, but if gate operations take 10 μs each, you only get ~10 gates before decoherence – not so impressive. Always ask under what conditions coherence was measured, and whether it holds when the qubit is actively computing or part of a larger system.
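To make that arithmetic explicit, here’s the back-of-envelope helper I keep around – a minimal sketch of my own, not a standard tool – that turns a quoted T₂ and a gate time into a rough depth budget, with an optional haircut for the fact that coherence during active operation is usually worse than the quoted idle figure:

```python
def depth_budget(t2_us: float, gate_time_us: float, usable_fraction: float = 1.0) -> float:
    """Rough number of sequential gates that fit inside the coherence window.
    usable_fraction is an assumed haircut for active-operation decoherence."""
    return (t2_us * usable_fraction) / gate_time_us

# The example above: T2 = 100 us with 10 us gates -> ~10 gates.
print(depth_budget(100, 10))        # 10.0
# With a 50% haircut for decoherence during computation -> ~5 gates.
print(depth_budget(100, 10, 0.5))   # 5.0
```

If that number comes out in the single digits, the headline coherence figure is doing far less work than the pitch implies.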

Fidelity

In quantum computing, fidelity usually refers to the accuracy of operations or the quality of qubit states. Gate fidelity is especially common – it’s 100% minus the error rate of a given quantum gate. For example, 99% fidelity means a 1% chance of error each time that gate runs. High fidelity is crucial: errors accumulate quickly on many qubits.

Misuse: Companies often cite their best-case fidelities. “Our single-qubit gates have 99.9% fidelity,” they’ll say. But was that the best qubit on the chip or an average? Was it measured in isolation or amid a full algorithm? Always clarify whether a fidelity number is single-qubit or two-qubit, and whether it’s peak or average performance across the device. Two-qubit (entangling) gate fidelity is usually much lower than single-qubit, and often the real bottleneck. A true picture of quality might be a range (e.g. 95-98% 2-qubit fidelity across 20 qubits).

If a claim only highlights one very high number, dig deeper. As an investor, I request error rates for all gate types and qubits, and I look for standard benchmarks (like randomized benchmarking or quantum volume) that aggregate fidelity in a meaningful way.
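To see why a single shiny fidelity number isn’t enough, compound the error over a whole circuit. A minimal sketch, assuming independent and uncorrelated gate errors (a simplification – real noise is often correlated, which only makes things worse):

```python
def circuit_success(gate_fidelity: float, n_gates: int) -> float:
    """Crude circuit success probability under independent gate errors."""
    return gate_fidelity ** n_gates

# A 99% two-qubit gate sounds high, but it decays quickly with depth:
for n in (10, 100, 1000):
    print(n, round(circuit_success(0.99, n), 3))
# 10 -> 0.904, 100 -> 0.366, 1000 -> 0.0 (effectively pure noise)
```

This is why a 99% gate, impressive in isolation, still caps useful circuits at a few hundred operations.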

Logical Qubits

This one is huge. A physical qubit is the raw qubit (a superconducting circuit, an ion, a photon, etc.) which is typically noisy. A logical qubit is an error-corrected qubit, composed of many physical qubits with Quantum Error Correction (QEC) so that it behaves as a more stable, high-fidelity qubit. Think of a logical qubit as a cluster of physical qubits with redundancy and active error checking, so that the logical state can survive errors. Having even a few stable logical qubits is a major milestone in this field.

Misuse: Startups often omit the “logical” qualifier on purpose. They’ll say “we have 100 qubits” knowing that uninformed listeners equate that with computational power. In reality, 100 noisy physical qubits with no error correction might be practically equivalent to 0 logical qubits. A claim involving even a handful of logical qubits would be astounding (and should come with published data). So if someone boasts a big qubit count, ask pointedly: “Are those physical or logical qubits?” If they dodge or seem confused, that’s a red flag. Many over-hyped announcements carefully avoid this distinction.

My rule of thumb: if I only hear “qubits” with no qualifier, I assume physical and immediately ask about error rates and QEC. Remember that a single logical qubit might require anywhere from dozens to thousands of physical qubits, depending on error rates and the code used. So raw qubit count alone means little without context.
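For a feel for the overhead, here is a rough sketch using the textbook surface-code scaling – logical error ≈ (p/p_th)^((d+1)/2), with roughly 2d² physical qubits per logical qubit. The threshold and targets below are illustrative assumptions, not a statement about any particular vendor:

```python
import math

def surface_code_overhead(p_phys: float, p_target: float,
                          p_threshold: float = 1e-2) -> tuple[int, int]:
    """Rough code distance d and physical qubits per logical qubit."""
    ratio = p_phys / p_threshold
    if ratio >= 1:
        raise ValueError("error rate above threshold: QEC makes things worse")
    # Solve p_target = ratio ** ((d + 1) / 2) for the code distance d.
    d = math.ceil(2 * math.log(p_target) / math.log(ratio) - 1)
    if d % 2 == 0:
        d += 1  # surface-code distances are conventionally odd
    return d, 2 * d * d

# 0.1% physical error rate, targeting a 1e-12 logical error rate:
print(surface_code_overhead(1e-3, 1e-12))  # roughly d = 23, ~1,000 physical qubits
```

At today’s error rates this lands around a thousand physical qubits per logical qubit, which is exactly why I discount raw qubit counts.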

QEC (Quantum Error Correction)

This refers to the suite of techniques used to detect and correct errors in quantum systems. Quantum error correction is necessary to achieve long algorithms and fault-tolerant quantum computing. In practice, QEC involves encoding one qubit of information into multiple physical qubits (e.g. using parity checks and syndromes) so that errors can be spotted and fixed on the fly. QEC is extraordinarily challenging: it’s not just theory; you need qubits with error rates below a certain threshold and the ability to do many additional operations for error detection.
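The encode/detect/correct loop is easiest to see in the simplest toy: a three-qubit bit-flip (repetition) code, simulated classically. This is only the classical analogue – real QEC must also handle phase errors and cannot copy quantum states – but it shows how redundancy pushes the logical error rate below the physical one:

```python
import random

def repetition_code_demo(p_flip: float = 0.05, trials: int = 100_000) -> float:
    """Encode one bit as three copies, flip each copy independently with
    probability p_flip, majority-vote, and return the logical error rate."""
    errors = 0
    for _ in range(trials):
        copies = [1 ^ (random.random() < p_flip) for _ in range(3)]
        decoded = 1 if sum(copies) >= 2 else 0  # majority vote
        errors += decoded != 1
    return errors / trials

# Logical error ~ 3*p^2 ~ 0.007, well below the physical rate p = 0.05.
print(repetition_code_demo())
```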

Misuse: QEC is a buzzword now. A startup might claim “we use error correction” when they really mean a rudimentary form of error mitigation or just theoretical codes on paper. To date, no one has a fully error-corrected quantum computer running arbitrary algorithms indefinitely – we’ve only seen prototypes. For example, Google demonstrated a logical qubit with a lower error rate than any of its 49 constituent physical qubits (a big achievement), but even that logical qubit wasn’t stable forever, just longer than a physical one.

So if a company says “we have quantum error correction working,” ask what code and what the measured logical error rate is, and whether it’s below the physical error rate (the hallmark of true error correction). Usually, you’ll hear that they’ve started implementing QEC (which is fine), but be wary of phrases like “solved error correction” – that would be Nobel Prize-worthy and should come with solid evidence.

In my diligence, I often request any peer-reviewed publications on their QEC experiments or have an independent expert assess if their approach to QEC is realistic.

Quantum Advantage

This term is often used interchangeably with “quantum supremacy,” but the two carry different shades of meaning. Quantum advantage generally means a quantum computer performing a useful task better or faster than a classical computer can. Quantum supremacy was defined as a quantum computer doing anything that a classical supercomputer practically cannot, even if the task is useless (Google’s famous 2019 experiment achieved supremacy for a contrived random circuit sampling problem). The key point about advantage is that it implies practical value.

Misuse: The timeline of quantum advantage is a magnet for hype. As of now, all credible demonstrations of quantum advantage have been on academic or very narrow problems – exciting proofs of progress, but not much practical advantage for commercially-relevant problems. For instance, a quantum machine might sample random numbers or solve a small optimization slightly faster, but not anything that helps your business today.

Be cautious when a startup claims “quantum advantage” in something like finance or machine learning; ask for specifics. Often, it turns out they compared against a naive classical method on a toy problem. Real advantage means beating the best classical algorithms on a meaningful task – and we’re not there yet for broad problems.

As an example, I encountered a software startup claiming advantage in portfolio optimization. Upon scrutiny, their quantum heuristic solved a particular small problem, but a classical solver with some tuning did just as well. It wasn’t a fair fight. This is common: if someone says “our quantum X is 100× faster than classical Y,” demand to know on what problem specifically and whether an independent group has verified that claim. It might be a contrived benchmark or an assumption that classical competitors won’t improve.

In due diligence, I might even bring in a classical algorithm expert for a week to see if the quantum speedup holds when classical methods are optimized (spoiler: often the gap narrows or closes). In short, treat “quantum advantage” claims as unproven until demonstrated under strict scrutiny.
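When I bring in that classical expert, the first thing they usually build is an embarrassingly simple tuned baseline. Here’s a minimal sketch of the idea on a toy QUBO-style objective (the instance and parameters are illustrative stand-ins, not any startup’s actual benchmark):

```python
import random

def energy(weights, x):
    """QUBO-style objective to minimize: sum_ij W[i][j] * x_i * x_j."""
    n = len(x)
    return sum(weights[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def tuned_classical_baseline(weights, restarts: int = 200, sweeps: int = 500):
    """Random-restart local search: a cheap stand-in for 'a classical
    solver with some tuning'."""
    n = len(weights)
    best_x, best_e = None, float("inf")
    for _ in range(restarts):
        x = [random.choice((0, 1)) for _ in range(n)]
        e = energy(weights, x)
        for _ in range(sweeps):
            i = random.randrange(n)
            x[i] ^= 1                 # trial flip of one variable
            e_new = energy(weights, x)
            if e_new <= e:
                e = e_new             # keep improving (or neutral) moves
            else:
                x[i] ^= 1             # revert worsening moves
        if e < best_e:
            best_x, best_e = list(x), e
    return best_x, best_e

# Toy 8-variable instance with random couplings:
random.seed(0)
n = 8
W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
print(tuned_classical_baseline(W))
```

A few hundred restarts of this cost seconds of compute; if a claimed quantum advantage can’t beat even this, the comparison was a straw man.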


By clarifying these terms, I set a foundation for the rest of the evaluation. Misunderstanding metrics is one of the biggest pitfalls for non-specialist investors. I’ve seen board members equate a qubit count headline with progress, not realizing the device’s quantum volume (a holistic performance metric) might be low due to errors. I’ve also seen startups misuse terms – sometimes unintentionally – because even within the industry we lack totally standard definitions (IBM’s “Quantum Volume” and other metrics are attempts to bring consistency). So, don’t be shy about asking for definitions in a pitch.

A good founder should be able to translate techno-jargon into plain English and should welcome the chance to clarify. In fact, assessing how they explain coherence or error rates can tell you a lot about their transparency. If they hand-wave (“oh, coherence just means our qubits are high quality, no need to worry”), that’s a cue to worry.

If you want to dig deeper into these and other metrics, you should check out: The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day.

Spotting Exaggerated Claims in the Wild

Understanding the metrics is step one. Step two is catching how those metrics (or other buzzwords) get inflated into bold claims. Let me share a few realistic technical examples (drawn from multiple experiences) of how quantum claims can be exaggerated or misunderstood – and how I approach them:

The Magical Qubit Count

I once had a founder brag, “Our system has 256 qubits, more than anyone else!” On paper, that was true – but context is everything. Those were 256 physical qubits on an annealing device, not gate-model qubits, and with no error correction. The marketing phrased it as if they were general-purpose qubits, which was misleading. This reminds me of D-Wave’s early claim in 2007 of the “first commercial quantum computer.” They did have a 16-qubit device and later sold machines to big companies, but it turned out to be a specialized quantum annealer, not a universal quantum computer. For years, whether those qubits provided any speedup was hotly debated. Calling it a “quantum computer” without qualifiers confused a lot of outsiders.

The lesson: big qubit numbers make headlines, but you must ask what kind of qubits and how they can be used. I pressed them: how many logical qubits might that be, effectively? Could they run a standard algorithm like Shor’s if given unlimited time?

Red flag: If a claim centers on a record number (“world’s largest quantum computer”), check the fine print. Often there are critical nuances (e.g. “photonic qubits” that are really just light modes, or “qubits” that only remain quantum for nanoseconds). As the saying goes, “Don’t buy the quantum horse just because it has more quantum legs” – what matters is how far it can actually run.

“We’ve Achieved Quantum Advantage (Trust Us)!”

A more recent case: a startup claimed “quantum advantage on a real-world problem”. Specifically, they said their quantum solver for a logistics optimization beat classical solvers. Impressive – if true. But when I probed for details, it emerged that the problem was extremely narrow and the comparison was against a baseline classical algorithm that wasn’t state-of-the-art. Essentially, they found a corner case where their quantum heuristic did well and called it “advantage”.

This is not uncommon. Many so-called advantages are on carefully contrived demos, not tasks businesses actually need solved. In the Google quantum supremacy experiment, the task (random circuit sampling) was deliberately chosen to be hard for classical but not useful for anything practical – and yet some media misinterpreted it as “quantum computer now faster than supercomputer” broadly. Similarly, startups might claim “optimization 100× faster” but the fine print is that it’s 100× faster on one specific dataset or a toy model.

The tell: if no peer-reviewed paper or independent benchmark is cited, and it’s just a glossy announcement, be skeptical.

What I do: I ask if they have published the results or if any third party has replicated them. I often request to see raw data or even to rerun the comparison. If they achieved a true advantage, they should be eager to have others validate it (that’s scientific gold). If they’re cagey or only offer vague results, likely the claim is overstated. One company’s bold advantage claim crumbled when we brought in an outside academic who tuned a classical algorithm and matched the quantum result – something the startup hadn’t considered. Remember, there’s academic quantum advantage (which is fine as a milestone) and practical advantage. We investors care about the latter, so we must dig until we understand which it is.

The Room-Temperature Miracle

Once in a blue moon, you encounter a truly extraordinary claim – like “we have a room-temperature quantum computer with thousands of qubits”. Such a claim strikes at the heart of known challenges (most high-performing qubits need ultra-cold dilution fridges, etc.). My immediate reaction: “If true, that overturns much of the current state of the art – so show me extraordinary evidence.” In one instance, a startup implied they’d achieved a stable quantum processor operating at ambient conditions. The skepticism antenna went way up. Sure, some qubit modalities (certain photonic or diamond NV-center qubits) can work at room temperature, but none have shown anywhere near the scale and fidelity to beat cryogenic systems. As my baloney detection post said, a claim like “a million high-fidelity qubits at room temp” is basically Nobel Prize territory and not something to believe from a mere press release. In my case, asking detailed questions revealed the startup’s device was more theory than practice – they had one qubit doing a trivial demonstration at room temp; scaling to thousands was pure projection.

The lesson: If a claim seems to ignore known physics constraints (e.g. no cooling, no vacuum, no error correction yet somehow huge performance), apply Occam’s Razor. The simplest explanation is usually that they’re mistaken or exaggerating, rather than that they miraculously outpaced IBM, Google, and every top lab in one go. I’ll politely ask them, “What do you say to experts who would find this claim hard to believe? Have you shared data with any renowned physicists for validation?” The reaction is telling. Credible folks will have some backing (even if preliminary), whereas pretenders might double down on hand-waving or “proprietary secret sauce.” In one memorable meeting, after I pressed for independent validation, a founder admitted they hadn’t yet demonstrated the effect outside simulations – a far cry from the original claim.

Independent Scrutiny vs. Black Box Boasts

A final pattern: how a company handles external validation. Serious quantum companies often welcome outside experts to test or review their tech. For example, IBM puts devices on the cloud for researchers, and those users will quickly call out any discrepancies in performance claims. IonQ, Rigetti, and others have published papers co-authored with university teams. If a startup says “we achieved X” and I ask “has anyone outside your team seen or verified this?”, the best answer is “Yes, we worked with XYZ university and they confirmed it, here’s the paper.” The worst answer is “Not yet, but trust us, it works.”

One red flag I look for: companies that operate entirely in a black box, sharing no data or access, just glossy demos. As I wrote before, “if a company’s device is a closed black-box and only they tout its miraculous results, caution.” In contrast, if they’ve let a respected professor or customer evaluate it freely, that’s a very good sign. A real-world example: Quantum Computing Inc. (QCI) once claimed to have a revolutionary photonic chip and even a whole photonic quantum foundry, but journalists later found it was largely smoke and mirrors. No independent person had seen this “foundry” – it was just a rented office, and their claims fell apart under scrutiny. In another case, IonQ faced short-seller allegations that their 11-qubit device’s performance was exaggerated and that a 32-qubit device didn’t exist yet. IonQ responded by pointing to their published technical reports and inviting more scrutiny. Because IonQ had a history of transparency (e.g. users could access their machine via cloud, and papers detailed their metrics), the community could push back on some of the short-seller’s extreme claims.

Takeaway: When doing diligence, I actively seek evidence of independent testing. I might ask to speak to a pilot customer or to see results from a reputable third-party lab. If the startup has nothing to show there, I consider whether it’s due to being very early (understandable) or because they are avoiding tests that might expose issues. As the maxim goes: trust, but verify – and in quantum, don’t even trust until verified.


By sharing these examples, I hope you see a pattern: we must read quantum startup claims with a scientist’s eye and an investigator’s persistence. It’s easy to be impressed by a slick pitch, but with a few pointed questions, the reality often emerges – whether it’s still impressive (sometimes it is!) or far more pedestrian. My approach is not to play “gotcha” with founders, but to signal that evidence and honesty matter more to me than lofty claims. The good ones understand this and engage gladly; the pretenders find excuses. This naturally leads into the structured framework I use to ensure I’m covering all bases in an evaluation.

The Five Lenses Framework for Quantum Due Diligence

Over time, I found that evaluating quantum ventures requires looking at them from multiple angles. I’ve boiled it down to five “lenses” that together give a comprehensive picture of a startup’s viability. These are (1) Physics Maturity, (2) Engineering Reproducibility, (3) Manufacturability, (4) Standards & Compliance, and (5) Commercial Realism. Let’s delve into each:

1. Physics Maturity

This lens asks: Is the underlying science proven to the level needed, or is it still speculative? In quantum tech, sometimes a company is betting on a novel physical phenomenon or an unproven theory. Other times, they’re using known physics but pushing it to new regimes. As a diligence question, I probe how much new physics is required for their plan to work. Are they using a well-demonstrated qubit modality (like superconducting circuits or trapped ions, which multiple labs have built and characterized), or something exotic like topological qubits or quantum effects that no one has reliably demonstrated yet?

A high score (Green) in physics maturity means the scientific principles are sound and widely validated. For instance, trapped-ion qubits: the physics (ion trapping, laser gate operations) has decades of research and many groups can do it. A low score (Red) would be a startup claiming a brand new quantum phenomenon as the basis of their tech – say, a room-temperature superconductor qubit or Majorana fermion qubits – which even academic consensus hasn’t reached. For example, Microsoft spent years on topological Majorana qubits before finally detecting even a hint of the phenomenon; it’s very cutting-edge physics. If a new startup now says “we’ll use Majorana qubits and have a topological quantum computer in 2 years,” I’d mark physics maturity as red and demand extraordinary evidence.

In practice, I ask questions like: What publications or experimental results does this approach build on? Can they cite independent groups who have achieved similar basics? If their whole project hinges on a single lab result that only they have seen, that’s a concern. Conversely, if they’re using, say, superconducting qubits, I know the physics is well understood (Cooper pair boxes, Josephson junctions, etc.) – the open challenge there is engineering scale, not whether qubits even work. Physics maturity also means understanding the limits: e.g., a given qubit type’s coherence time, error mechanisms, or quantum noise characteristics. If a company’s goal would require breaking a known limit (like significantly exceeding the coherence times achieved in all other experiments of that type), I need to know why they think that’s possible.

A real-world example: I evaluated a quantum sensor startup using NV centers in diamond. Physics lens: NV center quantum properties are well-known in labs (good). But they aimed to sense something at a sensitivity 100× better than published results – essentially pushing the physics to a new extreme. I flagged that as “amber” risk: physics is understood, but reaching that performance is unproven. I then asked if any theoretical analysis showed it was feasible, or if they had early data. We structured the investment with a milestone: show a prototype that beats the state-of-art by at least 10× (to confirm the physics can get there) before releasing the next tranche of funds.

In summary, under Physics Maturity I want to know: Are nature’s laws (as we know them) on your side? Or are you betting on a scientific breakthrough as well as an engineering one? Quantum computing already pushes physics – don’t make it harder by assuming new physics will magically appear. Stay grounded in known science and require evidence for any claims that go beyond it.

2. Engineering Reproducibility

This lens evaluates whether the technology can be reliably engineered and repeated. Quantum experiments are notoriously finicky. A one-time success in a lab (perhaps achieved at 2 AM by a grad student tweaking for weeks) is wonderful science, but can it be repeated on Monday at 9 AM, by someone else, or ten times in a row without fail? Reproducibility is the hallmark of solid engineering.

I examine a startup’s track record (if any) of replicating results and their methodology for testing. Good signs (Green) include: multiple runs of an experiment showing consistent outcomes; building a second device that performs similarly to the first; or external groups replicating their core result. Another good sign: the company has clear calibration and quality control procedures, indicating they’re moving from “science experiment” to “engineering process.” For instance, IBM’s team regularly publishes metrics of their devices over time, showing improvement and consistency – that gives confidence that it’s not just luck.

Yellow or red flags in engineering reproducibility: The startup’s tech works only in their lab under very specific conditions, or it requires constant manual tuning. Or perhaps they achieved a key milestone once, but haven’t repeated it. I often ask, “How many times have you run this algorithm on your hardware, and what’s the variance in results?” If they say “we ran it once and got a great outcome!”, that’s not convincing – quantum outcomes can vary due to randomness or noise. I also ask if they’ve tried a slightly scaled-up version and whether results were similar. Often, initial success doesn’t translate when you add even a few more qubits or run longer – that’s an engineering reality check.
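When I ask about variance, what I really want is a summary like the one this little helper prints – the mean, the spread, and the worst case over repeated runs (the numbers below are hypothetical):

```python
import statistics

def reproducibility_report(success_rates: list) -> str:
    """Summarize repeated runs of the same circuit on the same hardware."""
    mean = statistics.mean(success_rates)
    spread = statistics.stdev(success_rates)
    return (f"runs={len(success_rates)}  mean={mean:.3f}  "
            f"stdev={spread:.3f}  worst={min(success_rates):.3f}")

# Hypothetical data dump from ten repeated runs of the same algorithm:
print(reproducibility_report([0.91, 0.88, 0.90, 0.52, 0.89,
                              0.87, 0.91, 0.86, 0.90, 0.55]))
# The two outliers are exactly what "we ran it once and it worked" hides.
```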

One concrete diligence step: look for independent replication. As mentioned earlier, if no one outside the company has tested the claims, that’s concerning. Conversely, if they put their device on a cloud platform or had a university team test it and the results matched, that’s huge. I recall a case where a hardware startup collaborated with a national lab: the lab verified the device’s coherence and gate fidelity claims. That external stamp of approval moved my comfort from amber to green on reproducibility.

I’ll cite a scenario: A quantum computing startup claimed they could entangle 10 qubits in a unique way. I asked if anyone else had done it with their setup or if they’d repeated it. They admitted it was done only once as a proof of concept. That became a key diligence point – I considered their engineering reproducibility low. We set a due-diligence test: in the next month, repeat the 10-qubit entanglement 3 times and show us the data. If they couldn’t, it would indicate the result might have been a fluke or extremely sensitive to conditions. (In that case, they did manage to repeat it twice more, boosting our confidence somewhat).

Ultimately, this lens is about moving from “it worked once in a lab” to “it works reliably on demand.” It’s the difference between a science project and a product. I often say to founders: “Show me that your quantum device is becoming as boringly dependable as a classical one.” Not literally there yet, but trending that way.

3. Manufacturability

Say the physics is sound and you can repeat the experiment – now, can you build it at scale? Manufacturability is a huge filter in quantum tech. Many impressive demonstrations fail to translate into something you can build 100 or 1000 of, or something that can be manufactured outside a heroic lab environment.

When I examine manufacturability, I’m thinking about scaling in quantity and complexity. Key questions: Does the design rely on extremely specialized fabrication processes or materials? Can it leverage existing manufacturing infrastructure, or do we need a whole new factory/technique? If a company’s approach can piggyback on the semiconductor industry, for example, that’s a big plus. We saw a milestone in 2022 where researchers fabricated silicon spin qubit devices in the same factories that make normal chips – a step toward leveraging industrial-scale processes. As an investor, I love to hear phrases like “we use standard CMOS processes” or “our components are made in a commercial fab”. It means if demand grows, they can potentially produce more without a 10-year build-out.

I also consider yield and assembly. Quantum systems often involve delicate assembly of subcomponents (lasers, vacuum chambers, cryostats, etc.). If each machine is essentially hand-crafted by PhDs, that’s not scalable. I look for whether the startup is thinking about modularity and assembly automation. Are they designing with mass production in mind? Sebastian Weidt of Universal Quantum (a quantum CEO) noted that “the future hinges on engineering, not hype – building machines designed for manufacturability and modularity from day one, using standard industrial processes”. That resonates strongly with this lens.

A practical check: if a startup uses superconducting qubits, do they have access to an advanced nanofab to make their chips, and can that process be scaled to larger wafers or more qubits per chip? For ion traps, are they using semiconductor-style trap chips (many do nowadays) that a vendor can produce, or are they using one-off lab traps? For photonics, are they using readily available optical components or custom-made nonlinear crystals that only one supplier provides?

Manufacturability also extends to supply chain and reproducibility of manufacturing. For example, some superconducting qubit designs require extremely pure materials; if only one lab in the world can produce material pure enough, that’s a bottleneck. I learned this the hard way: one startup needed a specialized vacuum wafer-bonding process for their qubits – only a couple of machines in the world could do it, which meant long queues and no guarantee of scale. We identified that as a risk (amber at best) and the startup eventually pivoted to a more standard process.

Another aspect: packaging and control electronics. A quantum chip with 1000 qubits might need 1000 microwave control lines. Can they physically wire that up? Are there efforts to integrate or multiplex signals? If not, the “fridge spaghetti” problem (hundreds of coaxial cables going into a cryostat) will kill scalability. I always ask how they plan to control and read out more qubits as the device grows. A manufacturable design often includes clever engineering for control (like on-chip multiplexing, or photonic control, etc.). A company that hasn’t thought about that is looking at a brick wall down the road.

I recall touring a quantum lab and seeing a beautiful one-of-a-kind apparatus – my thought was “great, but can you build 10 of these for a customer?” Often the answer is no, not yet. So we structured our deal to fund the transition from “one of a kind” to a more modular setup that could be replicated.

In short, the Manufacturability lens is about turning science into products at scale. If physics is the heart and engineering is the muscle, manufacturability is the skeleton – without it, the thing can’t stand up in the real world. I look for alignment with existing industries (semiconductor fabs, optical telecom components, etc.) and a clear plan to reduce the artisanal lab work.

A quote I keep in mind: scalable quantum computing will likely require using the same factories that built the classical computing revolution. If a startup aligns with that, they score high here; if they seem oblivious to manufacturing challenges, they score low and I advise bringing in some industrial engineering expertise sooner rather than later.

4. Standards & Compliance

This lens might sound odd in a nascent field – what standards? But it’s an important perspective: How well does the startup adhere to or influence emerging standards, and are they compliant with relevant regulations or industry norms? Essentially, is the company playing nicely with the ecosystem, or are they off in a corner with proprietary metrics and ignoring interoperability?

In quantum computing hardware, formal standards are still minimal (though groups like IEEE and the Quantum Economic Development Consortium (QED-C) have working groups defining terminology and metrics). However, de facto standards are appearing: for example, IBM’s OpenQASM for quantum programming, or IBM’s Quantum Volume metric for performance benchmarking. If a hardware startup refuses to report any standard metrics (like two-qubit fidelity, coherence time, quantum volume) and instead touts a proprietary “quantum excellence score,” that’s a red flag. I will ask: Why not use common benchmarks to show progress? If their claims are real, they should be able to express them in standard terms. As Aspen Quantum Consulting notes, the lack of standardized metrics makes due diligence harder, but efforts like IBM’s Quantum Volume are trying to fill the gap.

I check whether the startup references any of those efforts. A good sign is if they say “we measure quantum volume X” or “we meet the DiVincenzo criteria for a quantum computer” or “we’re participating in QED-C benchmarking exercises.” It shows they’re engaged with the community and not just marketing on their own terms.

For quantum cryptography or post-quantum cryptography (PQC) startups, standards are absolutely crucial. NIST is in the process of standardizing PQC algorithms. So if I evaluate a PQC company, I ask if their solutions align with NIST’s candidates or if they plan to comply with standards like FIPS when they’re set. In cybersecurity, standards compliance will be a driving factor. A PQC startup not following NIST’s work would be a non-starter; compliance to those standards (once finalized) will be mandatory for winning enterprise and government contracts.

For quantum sensors or other devices, compliance can mean meeting regulatory requirements in their target industries (like medical device approval if it’s a biomedical sensor, or FCC regulations if it’s a communication device). I check if they’re aware of those hurdles.

Additionally, safety and export controls come in. Certain quantum tech (like advanced encryption or sensors with military potential) might fall under export control laws (e.g., ITAR or the new U.S. controls on quantum computers above certain qubit counts). A savvy startup will know this and have a plan (or at least acknowledge it). If I mention export control and they stare blankly, that’s a governance red flag.

Another aspect: data and integration standards. If the startup provides a quantum cloud service, do they integrate with common cloud platforms or APIs? If hardware, do they offer an API or interface consistent with what users are used to (like Qiskit or Cirq compatibility)? This isn’t “compliance” in the legal sense, but it is about standards and interoperability which affect adoption.

Finally, I consider industry consortium involvement. Are they active in quantum industry groups, open-source projects, or standardization efforts? A small startup might not prioritize this, but the ones that do earn points – it shows maturity and that they’re helping shape the ecosystem (or at least keeping abreast of it). For example, if a quantum network company aligns with the ETSI standards on QKD, or if a quantum computing startup is adopting the new IEEE definitions for quantum terminology, it indicates credibility.

In summary, Standards & Compliance is about future-proofing and trust. A company that embraces standards is less likely to be cutting corners or overselling, and more likely to produce something that customers and regulators will accept. It also mitigates some risk: imagine investing in a quantum encryption scheme that turns out not to meet the final standards – that investment could go to zero if everyone moves to a different scheme. So I push founders on how they’re future-proofing their tech in regard to standards. This lens often doesn’t make or break an early-stage deal, but it provides insight into the company’s foresight and integrity. A mature answer here can turn my internal dial towards “yes, they get it.”

5. Commercial Realism

Last but definitely not least: Commercial Realism. This lens looks at the business side and asks, “Does the timeline to commercial value make sense? Is the company’s go-to-market and revenue model realistic given the state of the technology?” Quantum tech is infamous for a long horizon to practicality. So I scrutinize whether the startup is acknowledging that or papering it over.

Several things I evaluate:

Use-case alignment

Are they targeting an application that quantum tech can plausibly impact in the near-to-medium term? Or are they claiming they’ll disrupt general-purpose computing or break all encryption in the next year? For example, a startup that offers quantum computing cloud access knows the near-term customers are researchers and experimenters (which is realistic). One that says “we will solve world hunger with quantum AI in 2 years” is clearly not grounded. I appreciate when a team can articulate a path: e.g., “First, we’ll serve quantum chemists who need small molecule simulations (which our 50-qubit device can handle with advantage). In parallel, we advance the tech for larger problems by 2028.” That shows commercial realism – near-term niche value, long-term vision. If instead they say “Next year we’ll outperform classical supercomputers on all optimization problems,” that’s fantastical.

Revenue and traction

Where is their money coming from now, and does that make sense? In quantum, a lot of early revenue is from government grants or research collaborations (because true product revenue is sparse). That’s okay – I just want it clearly disclosed. I get wary if a startup claims big “customers” when, on closer look, those organizations are just running unpaid pilots or have signed non-binding LOIs.

Another trick: some quantum startups offer consulting services (educating companies on quantum, etc.) to make near-term money. It’s not a scalable business, but it’s bread and butter while tech matures. I don’t mind that if they’re honest about it. Actually, a red flag is if all their “revenue” is consulting or workshops, yet they pitch themselves as a product company – it might mean the product is nowhere near ready. Many quantum companies today survive on government funding and “quantum consulting” gigs. I always separate those soft revenues from real product-market traction.

Timelines and Milestones

Probably the biggest issue. Typical tech startups might promise a product next year – and often deliver an MVP. In quantum, if someone promises a revolutionary capability in 1-2 years, I ask for their technical roadmap and look for realism. Does the timeline have intermediate milestones that are measurable? E.g., “By Q2 next year we aim for 99.5% 2-qubit fidelity on 5 qubits; by Q4, scale to 20 qubits with >99% fidelity; by 2026, integrate error correction for 1 logical qubit.” That is concrete (if ambitious).

On the other hand, a timeline that says “2024: 100 qubits; 2025: 1000 qubits; 2026: 1 million qubits and full fault tolerance” is basically science fiction. I’ve literally seen pitch decks with that kind of curve – presumably to wow investors. I mark those as Red for commercial realism unless the team has a miraculous explanation.

There’s a noted lack of clarity in the quantum industry about what real success looks like and how long it takes, which leads some companies to over-promise to VCs and then get stuck. I look for a founder who’s willing to say, “It will likely take 5-7 years to achieve X, and here’s why we think the market will wait or why we can survive until then.” Honesty here is very valuable.

Burn rate vs runway

Because quantum hardware is expensive, I check if they have the capital to actually reach their next technical milestone. A commercially realistic plan will align fundraising with technical risk-down events. If they need another $100M to get anywhere close to a saleable product, that’s a concern unless they have clear avenues to that capital (e.g., government programs, big partnerships). Are they planning to gate progress with milestones (which I prefer)? For instance, maybe do a smaller $10M raise now to prove a concept in 18 months, then a big raise. Versus a naive plan that just says “give us $50M now and trust us for 4 years.” I often impose or suggest stage gates: specific outcomes that, if not met, we stop or pivot. This is not unique to quantum, but quantum almost demands it due to high uncertainty.

Market readiness and customer input

Are there early adopters engaged? If a quantum startup claims a certain market application (say quantum ML for finance), I expect they’ve at least spoken to potential users in that space. If they have a design partner or pilot project with a credible third party, that is great evidence of commercial grounding. If not, I might connect them with an industry contact to get feedback. I have seen projects pivot after real customer feedback (e.g., realizing their solution needed to be an on-premise hardware offering rather than cloud due to a bank’s data policies – something they hadn’t thought of, which affected go-to-market).

In essence, Commercial Realism is about aligning the hype with actual execution and value. It’s where I sanity-check the business plan against the technology reality. I sometimes bluntly ask: “What will you realistically be able to sell in 2 years, and to whom? And what will they do with it?” If the answers are vague or utopian, that’s a problem. On the flip side, I appreciate teams that are frank: e.g., “Our quantum computer won’t outperform classical ones for 5 years, but in the meantime we offer cloud access for research and will build an ecosystem of algorithms, so when it’s ready we have a user base.” That is a viable approach many are taking.

One more thing under this lens: exit strategy and market timing. Quantum may take a long time, so investors need to consider, will this company produce intermediate value (IP, smaller products, strategic partnerships) that could allow an exit or at least continued funding? If their plan is just “we’ll solve it then sell for billions,” that’s not enough. Commercial realism means acknowledging quantum might be a marathon, not a sprint – and planning accordingly. I often discuss with co-investors the risk of a “quantum winter” if hype crashes, and whether this startup can survive such a downturn by having tangible progress or revenue.


Combining these five lenses – Physics, Engineering, Manufacturability, Standards, Commercial – gives a structured way to map out a quantum startup’s strengths and weaknesses. I sometimes create a table or scorecard to rate each (Red/Amber/Green). For example, a certain photonic quantum computing startup I looked at scored: Physics (Green, based on proven photonics theory), Engineering (Amber, as their interference visibility was inconsistent), Manufacturability (Green, since they use semiconductor photonics fab), Standards (Amber, as they used some proprietary protocols but planned to adopt standards), Commercial (Amber, a long road to revenue but a clear niche first). That kind of breakdown helped our investment committee understand where the uncertainties were.

Below is an example cheat-sheet mapping common claims to the kind of evidence I request and how I verify them, which touches on all these lenses:

Claim: “We have [X] qubits – the most in the industry.”
Evidence to request:
  • Clarify physical vs logical qubits.
  • Qubit specs: coherence time, error rates, connectivity.
  • Any demonstration of using all qubits together (e.g. entangling all of them, or running a circuit across all of them)?
How to independently verify:
  • Check whether independent benchmarks (quantum volume, etc.) have been reported.
  • See if a reputable lab or user has accessed the device (via cloud or collaboration) and published results on those qubits.
  • If only the physical count is high and the errors are high too, consider the effective quantum volume.

Claim: “Our qubit fidelity is 99.9% (world-class).”
Evidence to request:
  • Specify: 1-qubit or 2-qubit fidelity? Average or best-case?
  • Calibration data or randomized benchmarking results.
  • Any peer-reviewed paper or third-party measurement of these fidelities?
How to independently verify:
  • Look for publications or talks by independent researchers who used the device and reported error rates.
  • If possible, have an expert review the benchmarking methodology (to ensure no cherry-picking).
  • Compare against known industry figures (does it significantly beat IBM/Google published numbers? If yes, why isn’t it published?).

Claim: “We achieved quantum advantage on problem Y.”
Evidence to request:
  • A detailed description of problem Y and why it’s hard classically.
  • The classical baseline used for comparison (request their data on classical vs quantum performance).
  • Whether the results are published, or at least on arXiv – and if not, why.
  • A conversation with a domain expert or customer who witnessed the result.
How to independently verify:
  • Attempt a classical reproduction: engage a consultant or use improved algorithms to see if classical catches up (often revealing if the claim is overstated).
  • Check if any neutral expert (a professor, etc.) has reviewed the claim publicly.
  • Verify the problem isn’t a contrived toy: if it’s factoring 15 or something equally trivial, that’s not real advantage (so verify problem size and importance).

Claim: “We have a working error-corrected qubit / logical qubit.”
Evidence to request:
  • Specifics: which QEC code, how many physical qubits per logical qubit, and what logical error rate was achieved?
  • Has the logical qubit’s error been measured to be lower than the physical qubits’ error (break-even)?
  • Any publication (Nature, etc.) or independent validation of the QEC result?
How to independently verify:
  • Check the literature: does this align with known feats (Google’s 49-qubit surface code test, etc.) or go beyond them?
  • Consult a quantum error correction researcher to evaluate whether the approach and claims hold water.
  • If possible, review their syndrome data or have an outside group run a code test on their setup.

Claim: “Our device runs at room temperature, unlike competitors.”
Evidence to request:
  • Evidence of performance at room temp: coherence times and gate fidelities measured at ambient conditions.
  • The trade-offs involved (room temp often means photons or diamond NV centers – so ask about achievable qubit counts and gate speeds).
  • Any published proof-of-concept at room temp solving a non-trivial task?
How to independently verify:
  • Physics check: get an expert opinion on whether known physics supports quantum coherence at that scale without cooling.
  • If possible, replicate a simple benchmark at room temp vs low temp to see the performance gap.
  • Ensure they’re not just simulating quantum on classical hardware at room temp (it has happened!).

Claim: “We can easily scale to [N] qubits in 2 years.”
Evidence to request:
  • The scaling plan: what engineering steps lead from the current count to N? Do they need new control electronics, new chip fabrication, etc.?
  • Any partnership with a fab or manufacturer to support scaling?
  • A roadmap with intermediate scaling milestones (and what has been achieved so far against it).
How to independently verify:
  • Compare with industry trajectories (if they claim a jump far exceeding e.g. IBM’s roadmap, require an explanation).
  • Verify manufacturability: perhaps have a semiconductor fabrication expert assess the chip design and scaling feasibility.
  • Check their team and hiring: do they have people experienced in scaling hardware production? Lack of such talent can be a sign the claim is wishful.

Claim: “We have customers and revenue already.”
Evidence to request:
  • Specifics: names of customers or at least sectors, and the nature of each engagement (paid pilot, recurring contract, one-off consulting?).
  • Reference checks: if possible, speak with one or two customers under NDA to hear their perspective on the product’s maturity.
  • A revenue breakdown: how much from product vs grants vs services?
How to independently verify:
  • Independently verify any big partnership announcements (often one can reach contacts in those organizations).
  • See if the purported customers have spoken publicly about the collaboration – and if not, why.
  • Ensure revenue claims aren’t mostly government grants labeled as “contracts” (grants are fine, but they’re not market validation in the commercial sense).

This cheat-sheet is not exhaustive, but it covers many claim types I encounter and how I handle them. The pattern is clear: for each claim, I ask for evidence and then think of a way to verify through a third party or independent means. Sometimes that involves hiring an expert consultant for a quick look; other times it’s about comparing against known public data. The goal is to not take any bold claim at face value. As Carl Sagan said, “extraordinary claims require extraordinary evidence,” and in quantum tech I’ve found that to be a daily mantra.

Scoring and Gating Progress (RAG Assessments in Quantum)

After applying the five lenses and gathering evidence for specific claims, I like to score the findings in a simple way: Red, Amber, or Green (RAG). For a quantum due diligence, a RAG scorecard might look like:

  • Physics Maturity: Green (well-demonstrated superconducting physics)
  • Engineering Reproducibility: Amber (results shown but only on one device, needs independent replication)
  • Manufacturability: Green (uses standard fab processes, partner in place with Intel’s foundry)
  • Standards/Compliance: Amber (benchmarks reported but using a custom metric; plan to adopt industry API)
  • Commercial Realism: Red (timeline to revenue very optimistic, no clear interim products)
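In memo form, the same scorecard fits in a few lines of code – a sketch of my own convention, nothing standard – including the distinction between fatal and manageable reds that I come back to below:

```python
from dataclasses import dataclass

@dataclass
class LensScore:
    lens: str
    rag: str     # "Red", "Amber", or "Green"
    fatal: bool  # would a Red here kill the deal, or is it a known long-term gap?
    note: str

scorecard = [
    LensScore("Physics Maturity", "Green", True, "well-demonstrated superconducting physics"),
    LensScore("Engineering Reproducibility", "Amber", False, "one device only; needs replication"),
    LensScore("Manufacturability", "Green", False, "standard fab processes, foundry partner"),
    LensScore("Standards/Compliance", "Amber", False, "custom metric; plans to adopt industry API"),
    LensScore("Commercial Realism", "Red", False, "timeline to revenue very optimistic"),
]

fatal_reds = [s.lens for s in scorecard if s.rag == "Red" and s.fatal]
to_manage = [s.lens for s in scorecard if s.rag != "Green" and not s.fatal]
print("fatal reds:", fatal_reds or "none")  # none here, so the deal is negotiable
print("to manage:", to_manage)
```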

I literally have made tables like that in internal memos. It highlights the red flags immediately – in the example above, commercial realism is Red, meaning if we proceed, we must address that (maybe with contract terms or by adjusting the business plan). A Red in any critical category is a big deal. In a few cases, a single Red (like Physics Maturity being red because the concept is unproven) was enough for us to pass on the investment, unless the team had a very convincing mitigation (and even then, it might turn into an amber with a milestone).

What do RAG assessments mean in quantum specifically? They’re a way to quantify risk in a field full of uncertainties. For instance, a startup might be Green on physics (no fundamental unknowns) but Red on engineering (they haven’t built a second prototype that works). Another might be Green on tech but Red on commercial (no market in sight). As an investor, those are two very different profiles: the former might need an expert engineering lead and some time to turn Red to Green; the latter might be more about refocusing the market strategy or waiting for the market to mature.

We also use RAG scores to communicate with our investment committee or co-investors who may not be quantum experts. It’s a concise way to say “Here are the risk areas.” For example, “Team X is Green in tech but Amber in go-to-market because the industry adoption might be slow – we propose structuring the deal to accommodate a longer runway.” It’s much better than a binary yes/no, as it fosters discussion on how to manage the ambers and reds.

Speaking of managing: gating progress is a crucial strategy in quantum deals. Given the uncertainties, we rarely just give a lump sum and cross our fingers for five years. Instead, we tie funding or partnership progress to achieving certain milestones (gates). For instance, an initial seed might fund 18 months to demonstrate a 2-qubit gate at a certain fidelity. If they achieve that (turning an engineering Red into Amber/Green), then the next round funds scaling to 5 qubits with minimal loss of fidelity, etc. These are often formalized in milestones in term sheets or project contracts.

RAG scoring can also be applied as ongoing monitoring: during the project, we keep updating RAG status. If something turns from amber to red (say, a technical approach fails to pan out and they’re scrambling for Plan B), that might trigger a serious discussion or a pivot. On the flip side, turning ambers to green (hitting a key milestone) is cause for celebration and often public announcements.

One peculiarity in quantum is that sometimes a category remains red for a long time and that’s expected – for example, full error correction might remain Red (not achieved) for the entire early life of a company. That doesn’t mean the company fails; it’s just a known long-term challenge. So we also denote which reds are “fatal” vs “manageable if progress seen.” A fatal red might be something like “physics principle not actually real” – yeah, that’s fatal. A manageable red is “quantum advantage not yet achieved” – as long as they’re making steady progress, that’s okay, we didn’t expect full advantage from day 1.

In short, RAG scoring in quantum helps to keep everyone realistic. It forces an admission of what isn’t solved yet (preventing self-delusion). And it aligns investor and founder on where help is needed. For example, if manufacturability is Amber due to lack of a manufacturing engineer on the team, we all agree to prioritize hiring one – turning that to green faster.

To illustrate gating, let me share a typical plan we did with a quantum computing hardware startup:

  • Gate 1 (6 months): Demonstrate a two-qubit entangling gate with fidelity >90%. If achieved: release next tranche of funding. If not, either stop or renegotiate if other aspects are promising.
  • Gate 2 (12-15 months): Scale to 5 qubits and run a small algorithm (e.g. Bernstein-Vazirani or a 3-qubit QFT – see the sketch after this list) successfully. Also, publish results or get third-party validation. If achieved: green light Series A raise with our support. If not, maybe the science is harder than thought – consider a pivot or an extension with smaller bridge funding.
  • Gate 3 (24 months): Reach 10+ qubits integrated, with at least 1 logical qubit encoded (even if very short-lived). Secure at least 1 paying beta customer (e.g., a research lab buying access). If achieved: we double down investment for scaling. If not, reassess viability or exit strategy (maybe IP sale).
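For reference, the “small algorithm” in Gate 2 really is small. Here’s what a Bernstein-Vazirani demo looks like on an ideal simulator (a sketch assuming Qiskit is installed; a real gate review would run it on the startup’s hardware and look at the success rate, not a noiseless statevector):

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

secret = "101"               # the hidden bitstring the algorithm must recover
n = len(secret)

qc = QuantumCircuit(n + 1)
qc.x(n)
qc.h(n)                      # put the ancilla qubit in the |-> state
qc.h(range(n))               # uniform superposition over all inputs
for i, bit in enumerate(reversed(secret)):
    if bit == "1":
        qc.cx(i, n)          # oracle: phase kickback marks the secret bits
qc.h(range(n))               # interference collapses onto the secret

probs = Statevector.from_instruction(qc).probabilities(list(range(n)))
measured = format(max(range(2 ** n), key=probs.__getitem__), f"0{n}b")
assert measured == secret    # a single oracle query suffices on ideal hardware
print(measured)              # -> 101
```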

Each gate corresponds to turning some of our ambers to green. For instance, Gate 1 addresses Physics/Engineering (can you actually do a good gate? If yes, physics/engineering risk drops massively). Gate 3 addresses Commercial (customer interest) and some physics (logical qubit concept). This staged approach allowed us to fail fast if something fundamental went wrong, or to continue with confidence as risk retired.

I strongly advocate this gated approach to fellow investors in quantum. It’s common in pharma and biotech (hit clinical trial milestones or stop) – deep tech should be similar. It also helps avoid the scenario where hype carries a startup to a huge valuation before it’s actually proven anything; that can end in tears (and a quantum winter). Instead, measured, gated progress can build value sustainably.

One-Week Due Diligence Tests: Exposing Risk Early

When time is short and I have to evaluate a quantum opportunity quickly (say we have a week to decide on a seed investment or join a round), I deploy some fast and frugal tests. These are like mini-experiments or research sprints that can reveal a lot of red flags in just days. Here are a few of my go-to one-week due diligence tests:

  • Literature & Patent Deep-Dive (1-2 days): I (or an expert I hire) spend a day looking up any papers or patents by the team or closely related to their claims. If the startup claims a new algorithm, has it been published or at least posted on arXiv? If it’s hardware, what did their academic thesis or prior work show? Often, startups emerge from academia, so there’s a trail. In a crunch, I’ve had a postdoc literally replicate the math of a proposed quantum algorithm to see if it’s sound – in one case we found it only worked by assuming an unrealistic oracle. That saved us from a likely dud. This kind of desk research can often be done quickly and will either reinforce credibility (if you find solid prior work) or raise questions (if nothing exists or you find contrary results).
  • Quick Expert Consults: In under a week, you can get at least one knowledgeable person’s opinion. I maintain a network of quantum researchers and engineers (and honestly, most serious investors in this space do). With a few emails, I can schedule a 30-minute call for an expert to sanity-check the idea. I’ve done calls with a topological quantum academic to ask “does this approach even make sense in theory?”, and with a former IBM engineer to ask “do these claimed error rates seem plausible?” You’d be amazed how a few pointed questions to an expert can cut through hours of my own analysis. Sometimes they’ll say “Yes, that group is legit, I saw their conference presentation and it was solid.” Or “No, if someone says that, they likely haven’t dealt with X noise source at all.” I’m always careful about confidentiality, of course, and avoid sharing any proprietary info beyond what’s public or approved. But experts often know the reputation of a team or have tried similar experiments. This social proof (or lack thereof) is very informative.
  • Reproduce a Key Result (on paper or sim): If the startup is algorithm/software-focused and they have a claimed result, we try to replicate it on a small scale. For example, a quantum machine learning startup said their algorithm could classify data better than classical. We took their published figures and ran a similar test with a classical model (or even a simple quantum sim) to see if the claim held – a minimal version of this check is sketched after this list. In a week, you might not fully reproduce everything, but you can validate trends. In that case, we found the classical baseline they compared to was ridiculously weak; a better classical method in our quick test already beat their quantum results. That indicated the “advantage” was more of a straw man. This test cost basically some compute time and a grad student’s effort, but it was decisive.
  • Ask for a Live Demo or Data Dump: In diligence, I sometimes ask the startup to show me, not just tell me. In one hardware case, we set up a video call where the team live-demonstrated their quantum device performing a simple algorithm. I wasn’t expecting perfection, but I wanted to see how they operate it, how stable it is, etc. In a week, you can often arrange a demo or at least get raw data logs from a recent run. Reading raw experimental logs (timestamps, error rates over time, etc.) can reveal issues like instability or excessive manual tweaking. If they claim high uptime, I’ll see if the log matches that or if there were many resets. A week is short, but even a half-day demo can speak volumes. If a team can’t set up any kind of demo in a week, either they’re extremely early (fine, if expected) or maybe things aren’t as ready as they claim.
  • Customer Feedback Calls: If the company says it has pilot customers or partnerships, in a week I can often get on the phone with one of them (with permission, usually facilitated by the startup). A 30-minute call with a “customer” can validate whether they actually used the product and found value. I recall a case where a startup claimed a big automotive company partnership; we spoke with our contact there and learned it was just a signed MoU with no activity yet – the startup was stretching the truth by calling it a partnership. That tempered our view of their commercial progress. Conversely, another time a pharma researcher told us “Yes, we’ve been trying their quantum chemistry platform and it actually gave some interesting results for small molecules” – very encouraging to hear an end-user confirm that. These calls can be done quickly if everyone cooperates.
  • Internal Red-Team Brainstorm: I gather my team (and sometimes the startup team too) and do a focused “pre-mortem” exercise: assume the startup fails in 2 years – what likely caused it? In one hour, we list potential failure modes: “decoherence too high,” “classical improved faster,” “customers uninterested,” etc. Then we rank them. This isn’t exactly a test, but it surfaces the biggest risks. With those identified, we can target remaining diligence questions or conditions on those points. For example, if “classical catch-up” is a big worry, we might test that more (say, hire a classical optimization expert to examine it). If “decoherence” is a worry, we check the latest physics papers for any fundamental limit. This exercise uses our collective knowledge to simulate what could go wrong, and then we try to invalidate those concerns within the week if possible (or accept them if they’re inherent).
  • Small-Scale Trial: In rare cases, if the startup’s product is accessible (say, a cloud API), we try it ourselves. One week is enough to run a few jobs on a quantum cloud service or use their SDK if they have one – see the test-drive sketch after this list. This is more applicable to software. We did this with a quantum algorithm API: signed up, ran a known algorithm, and analyzed the output. We discovered the developer experience was rough and the results inconsistent, which we fed back to the startup (and into our decision). It’s akin to test-driving the product.
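To make the baseline check concrete, here is a minimal sketch of the kind of test we ran, using scikit-learn. The synthetic dataset and the “claimed” quantum accuracy below are hypothetical placeholders, not any startup’s actual data:

```python
# Minimal sketch of a classical-baseline sanity check for a quantum ML claim.
# All numbers here are hypothetical placeholders, not a real startup's results.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the kind of small tabular dataset a pitch deck might use.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5, random_state=0)

# Hypothetical accuracy the startup reported for its quantum classifier.
claimed_quantum_accuracy = 0.81

# A strong-but-ordinary classical baseline, evaluated with cross-validation.
baseline = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(baseline, X, y, cv=5, scoring="accuracy")
print(f"Classical baseline: {scores.mean():.3f} +/- {scores.std():.3f}")
print(f"Claimed quantum:    {claimed_quantum_accuracy:.3f}")

# If a routine classical model matches or beats the claimed number,
# the "quantum advantage" was likely measured against a straw-man baseline.
if scores.mean() >= claimed_quantum_accuracy:
    print("Red flag: the comparison baseline was probably too weak.")
```

The point is not the specific model – it’s that an afternoon with an ordinary classical baseline can tell you whether the comparison in the pitch deck was fair.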
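And for the small-scale trial, the test drive can be as simple as running a known-answer circuit through the vendor’s stack. A minimal sketch using Qiskit’s local simulator as a stand-in for whatever API or SDK the startup actually exposes:

```python
# Minimal test-drive sketch: run a known-answer circuit (a Bell pair) and
# check the output statistics. Qiskit's local simulator stands in here for
# whatever cloud API or SDK the startup actually provides.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)             # put qubit 0 in superposition
qc.cx(0, 1)         # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=2000).result().get_counts()
print(counts)  # expect a roughly 50/50 split between '00' and '11'

# Sanity check: on an ideal backend, '01' and '10' should be (near) absent.
# On real hardware, their rate is a quick proxy for readout/gate error.
bad = counts.get("01", 0) + counts.get("10", 0)
print(f"Error-like outcomes: {bad / 2000:.1%}")
```

If even this kind of known-answer job is awkward to submit or comes back inconsistent, that tells you something about the maturity of the product long before any deep technical review.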

These quick-and-dirty tests are not foolproof, but they often flush out the biggest exaggerations or unknowns early. They also demonstrate to the founders that we do our homework. A diligent investor presence can actually encourage the startup to be more forthcoming (“these folks will check, so let’s be honest upfront”). It sets a tone of seriousness.

You should also stay open to positive surprises in that week. I’ve gone in skeptical and by Friday been more optimistic because the team provided solid evidence or experts said “no, this approach is actually very clever.” The short diligence tests can build confidence as readily as they reveal flaws.

In quantum, I’d say always run at least some of these quick tests. Traditional startups might get away with just market sizing and team interviews in diligence, but for quantum the technical risk is too high. A few well-chosen technical tasks in that first week can save you from investing in something that simply doesn’t work. Think of it as stress-testing the startup’s claims in a sandbox environment – better to do it now than to find out the hard way later.

How Quantum Diligence Differs from Classic Tech Diligence

It’s worth reflecting on how evaluating a quantum startup differs from evaluating a more traditional tech startup. Having done both, I can highlight some key differences:

  • Deep Science vs. Shallow Tech: In most software or internet startups, due diligence focuses on market fit, growth metrics, competitive landscape, maybe code quality – rarely do we question the laws of physics or whether the product is even feasible. In quantum, a huge part of diligence is essentially scientific peer review. You’re checking if the science holds up. This requires a different skill set – as mentioned, often consulting with PhDs or having one on your team. As an investor, I had to dust off my physics textbooks and stay current with research in a way that I never have to for, say, an e-commerce startup. Misjudging technical feasibility in quantum is far easier than in normal tech. In classic software, if three smart coders say they can build an app, they probably can. In quantum, three brilliant physicists might still be chasing a mirage if nature says “nope.”
  • Lack of Benchmarks and KPIs: Traditional startups have plenty of standard metrics (ARR, user growth, CAC, etc.). In quantum, what’s the KPI? Qubit count? Fidelity? Algorithms solved? The industry is still debating how to measure progress. IBM’s Quantum Volume is one attempt to give a single number, and IonQ promotes “algorithmic qubits,” which folds qubit count and fidelity into one figure. But these are not as straightforward as, say, server throughput or clicks (see the toy calculation after this list for why such single numbers can mislead). The Aspen consulting notes highlight that the absence of standardized metrics makes due diligence tricky. So we often have to create custom KPIs per deal. I find myself defining what success looks like for each startup individually (e.g., “if they can factor a 5-bit number by end of year, that’s a sign of progress”). There’s far more subjectivity and expert judgment needed, versus plugging numbers into a financial model.
  • Longer Time Horizons & Uncertain Timelines: Most tech startups aim to reach product-market fit in a couple of years. Quantum companies might be doing 5-10 years of R&D before the market truly materializes. This greatly affects diligence – you must think in terms of technology roadmaps and even macro trends (like when a quantum computer will break RSA). Investors must be more patient and comfortable with uncertainty. We often rely on things like technology readiness levels (TRLs) or expert surveys (e.g., Mosca’s well-known quantum timeline) to sanity-check timelines. Traditional VCs might balk at a company saying “we’ll have significant revenue 5 years from now,” but in quantum that could be realistic if the payoff is huge. Thus, quantum diligence is as much about evaluating staying power (does the team have a plan to survive the long winter? Are they appropriately capitalized or grant-savvy?) as it is about short-term traction.
  • Higher Technical Bar for Investors: To be blunt, a VC could invest in a social media app without knowing how to code – they’d look at user metrics and market trends. In quantum, an investor who doesn’t grasp at least the basics of qubits, decoherence, error correction, etc., is at major risk of being snowed by hype. I’ve seen generalist investors excitedly quote a startup’s press release that “they achieved 99.999% fidelity,” not realizing it applied to a single isolated operation under conditions that don’t scale. This is dangerous. As an investor group, we had to educate our whole team on quantum fundamentals. We brought in advisors. We essentially treat quantum deals more like biotech deals – where you’d bring in scientists to evaluate drug candidates. If you’re reading this as a non-technical decision-maker: identify knowledgeable people you trust, and involve them. Diligence in quantum often requires reading scientific papers or at least having them translated for you. That’s a big difference from normal tech due diligence.
  • Hype and Skepticism Imbalance: Every emerging tech has hype, but quantum has a weird mix of extreme hype and extreme skepticism out there. Some people think it’s all baloney; others think it solves everything magically. Typical tech due diligence doesn’t involve debunking pseudoscience (except maybe in some AI startups). In quantum, we literally have to filter out “quantum woo” – products that misuse the word quantum for marketing. I’ve had to determine if a startup is legit science or borderline scam. That’s usually not a question when evaluating, say, a new SaaS tool (the code may be buggy, but it’s not violating physics!). Quantum due diligence often starts at “is this even real or snake oil?” before moving on to normal business questions. In this sense, quantum investors must themselves be skeptics-in-chief, armed with the kind of questions I listed earlier to detect nonsense.
  • Role of Government and Academia: Quantum startups interact with academia and government far more than a typical startup. Many are spin-outs from university labs. Many rely on grant funding or joint projects with government labs (NASA, DoE labs, etc.). In diligence, that means we often check their academic credibility (publications, reputation of the professors involved) and their ability to win grants. A founder who is a PhD with Nature papers is a positive indicator in quantum (whereas in an app startup, a PhD might not matter). Also, being on a government roadmap (e.g., selected for a national quantum initiative program) adds credibility. So I find myself reading research group websites and government press releases as part of diligence – very different from reading app store reviews or sales pipeline data in a normal startup. This also affects standards and compliance, as discussed: aligning with government standards (like NIST PQC) can make or break a quantum security startup. Traditional tech might worry about compliance too (health tech with FDA, etc.), but for quantum it’s more about aligning with nascent standards to ensure future viability.
  • Exit Landscape Uncertainty: In classical VC, you roughly know who might acquire a given startup or how it could IPO. In quantum, the potential acquirers might be tech giants, defense contractors, or even governments – it’s less charted territory. Diligence has to consider scenarios like: if this works, is it a standalone company or an acquisition target? Many big firms (IBM, Google, Honeywell, etc.) have their own quantum efforts, so will they buy or bury others? We incorporate that thinking: if the startup’s tech is good, could it be a pick-up for a Google or the like? This isn’t a “diligence” item per se, but it influences how we assess the risk/reward. Traditional startup diligence might focus on growth to IPO; quantum diligence often includes positioning for strategic partnership or acquisition.
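To make the KPI problem flagged above concrete, here is a toy back-of-envelope model – explicitly not IBM’s Quantum Volume or IonQ’s actual definition – estimating the largest “square” circuit (n qubits, roughly n² two-qubit gates) whose aggregate success probability F^(n²) stays above one half for a given two-qubit gate fidelity F:

```python
# Toy figure of merit (NOT IBM's Quantum Volume or IonQ's #AQ definition):
# the largest n-qubit "square" circuit (~n^2 two-qubit gates) whose
# aggregate success probability F**(n**2) still exceeds 1/2.
import math

def toy_effective_qubits(two_qubit_fidelity: float) -> int:
    # Solve F**(n**2) >= 0.5  =>  n <= sqrt(ln 0.5 / ln F)
    return int(math.sqrt(math.log(0.5) / math.log(two_qubit_fidelity)))

for f in (0.99, 0.995, 0.999, 0.9999):
    print(f"2q fidelity {f}: ~{toy_effective_qubits(f)} useful qubits")
```

On this toy model, a device with 99% two-qubit fidelity supports square circuits of only ~8 qubits before errors dominate, no matter how many physical qubits sit on the chip – which is exactly why raw qubit count is a poor KPI and why any single-number metric deserves scrutiny.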

One concrete example of the difference: in a SaaS startup’s diligence, I might spend 80% of my time on market and customer calls and 20% on tech. In a quantum hardware startup, it’s reversed: easily 80% on tech validation and roadmap, 20% on market (because if the tech doesn’t work, the market doesn’t matter; and if it does, the market is assumed to be large or still evolving). The technical unknowns dominate the discussion. Market risk is still there (maybe quantum solutions won’t be needed if classical keeps improving), but tech risk is paramount. Thus we involve scientists early and run prototypes or simulations as part of diligence, which is rare in other fields.

Another difference: talent evaluation. In a normal startup, you evaluate if the team can build a business, their experience, etc. In quantum, you also scrutinize the scientific talent. Does the team include leaders from top labs? Are they missing a key expertise (like a cryogenic engineer, microwave engineer, etc.)? A quantum startup might need a mix of PhDs and seasoned engineers that is atypical. I often have a scientist co-review the resumes and publications of the technical team as part of diligence. It’s almost like hiring due diligence – you want to ensure the team is technically capable of delivering on the science. In other tech startups, you rarely go that deep into technical CVs (except maybe for AI startups, one might check research credentials similarly).

Finally, risk tolerance is different. Quantum tech can feel like a gamble, because if it hits, it changes the world (huge reward), but it might also prove impossible or decades away (huge risk). It reminds me of biotech drug investing – binary outcomes after long R&D. Traditional tech has more gradations (if the product fails, maybe the company can pivot to something adjacent). Quantum companies might not have an easy plan B if, say, their approach to qubits fails. Due diligence has to assess that “Plan B” aspect too – does the company have alternatives if initial approach falters? E.g., are they researching two types of qubits in parallel? Or is their IP broad enough to apply to something else? We usually ask that. In normal tech, pivoting is common and easier (a shopping app can pivot to a social app perhaps). In quantum, a pivot might mean starting over on a new modality – almost a new company.

In summary, quantum due diligence is more akin to scientific research review + deep tech risk management, whereas typical tech due diligence is more market and execution focused. It demands more technical acumen, patience, and creative deal structuring (milestones, etc.). The reward is potentially game-changing technology, but as investors we have to adapt our approach to responsibly foster it. A colleague of mine put it well: “In quantum, you bet on the team’s understanding of physics and your own understanding of the team; in software, you bet on the team’s understanding of users and your own understanding of spreadsheets.” There’s truth in that.

When (and How) to Engage Independent Experts

One of the smartest moves I’ve learned in quantum diligence is knowing when to call in reinforcements. As detailed as my personal field guide is, I’m not too proud to admit when I need an expert’s eyes. Engaging independent subject-matter experts (SMEs) can save an investor or a corporate partner from major missteps. But it’s also important how you do it – you want unbiased insight, not just a rubber stamp or, on the flip side, not an overly academic critique detached from startup realities.

When to engage an expert:

  • At the technical due diligence stage, for sure. If you’re evaluating a quantum deal and you don’t have deep expertise on staff, bring in an external quantum engineer or physicist early in the process. Ideally, once you’ve done an initial screen (so you know the basics and think it’s worth looking deeper), but before finalizing the investment memo. For venture funds, this might mean having a go-to “quantum advisor” who can be pulled in for a few hours or days per deal. I routinely loop in a professor friend or a trusted industry researcher after I get the initial data from the startup but before I make a final recommendation. They often see things I don’t.
  • When a claim is above your comfort level. Even if you have a technical background, maybe the startup’s approach is in a sub-field you aren’t an expert in (e.g. you know superconducting qubits but this is a photonic quantum networking play – find a photonics expert). Or if a claim sounds “too good to be true,” definitely have an independent party evaluate it. For instance, when a startup claimed a revolutionary algorithm, I got an academic quantum algorithm researcher to review their method – he found an error in their complexity analysis. The earlier you catch that, the better.
  • When making a major investment or partnership decision (i.e., a gating decision). Say you’ve invested in a quantum startup and they’re approaching a critical milestone – it can be wise to have an external audit at that point. For example, if a milestone is “demonstrate a 50-qubit device,” consider hiring an outside lab or expert to validate the device’s performance. Some corporate strategics do this: before a big contract, they ask a national lab to test the vendor’s tech. It’s not always feasible (some startups might resist external testing until trust is built), but even a paper review or an on-site visit by an expert can add assurance.
  • When red flags appear. If during your own diligence you hit something confusing or concerning (data that doesn’t add up, or a concept you’re struggling to verify), pause and get an expert. I did this when a certain topological qubit startup presented data that looked odd to me – I called a professor who immediately said “that kind of graph usually means they’re averaging over many failed runs, hiding variance.” We then pressed the startup on that specific point. So whenever I feel out of my depth, I treat it as a signal to phone a friend.
  • Periodically for monitoring. If you’re on the board of a quantum company, bringing an independent expert to periodically review progress can be invaluable. It’s similar to how biotech VCs have a Scientific Advisory Board keep tabs on R&D. In one of my investments, we arranged an annual review where an outside domain expert attends a board meeting to hear the tech update and give feedback. It helps the founders too – they get an outside perspective that isn’t just investor pressure, but technical mentorship. Many quantum startups actually welcome this, as long as IP is protected, because they often come from academia where peer feedback is normal.

How to engage experts effectively:

  • Choose the right expert. Sounds obvious, but match the expertise to the need. If it’s quantum error correction, get a quantum error correction scientist, not just any physicist. Ideally find someone reputable who also understands the constraints of startup environments (not someone who will insist on unrealistic perfection). It also must be someone independent – not someone with a competing agenda or who’s tied to the startup. Sometimes big-company researchers can do this as consultants, or professors (with appropriate conflicts disclosure and NDAs). There are even consulting firms focused on quantum due diligence – some founded by NIST and national-lab alumni – that assemble teams of experts. That’s an option if you need a comprehensive study.
  • NDA and scope clarity. Typically, you’ll have the expert sign an NDA if they are seeing anything confidential from the startup. Define the scope: e.g., “review technical documents X and Y, have one call with the startup’s CTO, and provide an opinion on claim Z.” Keep it focused so it doesn’t balloon into a research project (unless you need a full technical audit). Pay them for their time; many are happy to consult for a reasonable fee or honorarium. I usually frame it as, “We want your honest assessment of the feasibility and any potential show-stoppers.” Encourage candor.
  • Include the startup in the process where appropriate. This is important – don’t make the startup feel ambushed or undermined by secret experts lurking about. I usually tell the founders, “We love what we see, and we’re bringing in an external expert just to validate and help us understand deeper. We’d like you to present to them or share data with them.” Good founders actually appreciate that (if they have nothing to hide). If a startup objects strongly to any outside evaluation, that’s a warning sign. One can accommodate if secrecy is a concern (maybe the expert only sees public info or maybe it happens after an initial term sheet), but an outright refusal is a flag.
  • Use multiple experts if needed for balance. Quantum is full of opinions; one academic might be overly pessimistic (“It’ll never work”), another overly optimistic (especially if they just love the idea). If it’s a huge investment, I sometimes get two independent views to compare. Also, if one is an academic, I might also talk to an industry engineer to see different perspectives. Then weigh the consensus.
  • Heed their warnings, but make your own decision. An expert will rarely say “invest” or “don’t invest” – they’ll highlight risks or unknowns. I listen carefully if an expert says “this group is known for exaggerating” or “their result violates what X paper showed.” That typically leads me to dig more or pass. But sometimes experts are skeptical of any commercial effort (especially academics who might say “It’s too early, no one can do this yet,” yet several companies are trying). So I use their input to inform, but I still consider the broader picture (team grit, etc.). It’s like having a specialist doctor consult on a patient – they give a detailed diagnosis, but the overall treatment plan is up to the primary doctor (investor in this analogy) who knows all factors.
  • Engage them as ongoing advisors if valuable. If you find a great expert, bring them closer. We’ve made some experts formal advisors to our fund or the company. That can help with recruiting (the startup can tout that Dr. So-and-so is advising), and it keeps a channel for future questions. Just manage conflicts (if they later want to invest or join the startup, ensure all sides are cool with that and no undue influence was exerted initially).

One anecdote: In one due diligence, the startup had a new approach to quantum random number generation. I got an expert who literally wrote a paper debunking some quantum RNG claims in the past. He analyzed their setup and found it sound, except he pointed out a subtle assumption about their source that could be improved. We ended up investing, and as part of the deal we suggested they bring that expert on as an advisor to help strengthen that part. They did, and it improved the tech and also reassured other investors. That’s a win-win of expert involvement.

Another case: a quantum computing startup’s CEO made very bold timeline claims. We spoke to a couple of well-known professors in quantum computing (under NDA) and they both said, in polite terms, that the timeline was wildly optimistic given the approach. They enumerated technical challenges that would likely slow things down. We confronted the CEO with this (without naming experts) and he conceded those challenges were real, but he was being optimistic for fundraising. We adjusted our expectations (and valuation) accordingly, and suggested he temper messaging to avoid credibility loss. He did so in later conversations. Thus, experts not only saved us from believing a rosy timeline, but helped the company not over-promise what it couldn’t deliver.

In summary, independent SMEs are an integral part of quantum due diligence. Use them as truth-tellers who can validate or challenge the technical story. Engage them early, often, and respectfully. I’d say any significant quantum investment without an outside expert review is like sailing without a compass – you might get there, but you’re relying too much on luck. This field is just too complex to go it alone, no matter how smart my team is. Recognizing that and budgeting time and resources for expert input is one of the best practices we’ve adopted. It turns due diligence from a daunting task into a collaborative one – you essentially crowdsource wisdom to make a better decision.

Conclusion: Navigating Quantum with Eyes Wide Open

Walking away from a quantum pitch or diligence deep-dive, I often feel a mix of exhilaration and caution. Exhilaration because you glimpse a future where these bizarre, powerful machines solve problems we once thought unsolvable. Caution because between here and that future lies a minefield of technical hurdles, hype traps, and execution risks. As a “quantum-aware” investor/advisor, my role is to navigate that minefield with clear eyes and a steady hand – neither blinded by the sparkle of qubit counts nor deterred by the naysayers who claim it’s all vapor.

To all VCs, corporate innovation leads, and tech strategists reading this field guide, I hope it has armed you with a healthy diligence mindset. Quantum tech is not magic – it obeys physics, it demands engineering discipline, and it unfolds on its own timeline. Our job is to peel back the marketing veneer and see what’s really there. The tools and tactics I shared – from understanding coherence vs. fidelity, to applying the five-lens framework, to grilling claims with independent verification and quick tests – are about instilling rigor into that process.

A few actionable takeaways to remember:

  • Always define the terms: Don’t let jargon like “quantum advantage” or “logical qubit” slide by without pinning down what it means in that context. Your first lens is clarity.
  • Insist on evidence and transparency: If a startup makes a bold claim, look for a paper, a benchmark, a demo – something tangible. Extraordinary claims need extraordinary evidence, period.
  • Use the Five Lenses in your evaluation: Structure your thinking around physics, engineering, manufacturability, standards, and commercial reality. It ensures you don’t overlook a critical dimension (like can they actually build this at scale, or do they have any idea how to sell it?).
  • Map claims to proof: For every big statement, jot down what proof you’d want. If you can’t find it, ask for it. If it doesn’t exist, consider what that means (maybe the tech is too early, or the team hasn’t validated something crucial).
  • Score and prioritize risks: It’s fine that many things are uncertain in quantum – the key is to know which uncertainties are lethal versus manageable. Use RAG (red-amber-green) scoring or a similar system to focus on the show-stoppers; a scorecard sketch follows this list.
  • Don’t skip quick diligence hacks: Whether it’s a one-week test or a call with an expert, these can reveal issues in time to avoid a bad deal (or to renegotiate it).
  • Mind the differences: Approach quantum deals with a different lens than a typical software deal. Build in more buffer (time, money, patience) for the unknown unknowns. Plan milestones and gates.
  • Engage experts: Have a network of trusted quantum experts to sanity-check things. They are your secret weapon in due diligence – use them.
  • Be ready to walk away or say not yet: If too many things don’t add up (multiple Reds in your scorecard with no mitigation), it’s okay to pass or wait for further progress. FOMO is a dangerous impulse in a field where real progress is often slow and steady behind the scenes, not flashy overnight jumps.
  • Stay educated: The quantum field evolves monthly with new papers and benchmarks. Keep up with key developments (e.g., track the top hardware players’ progress, NIST standards in PQC, etc.). Your internal knowledge base needs regular quantum updates to remain relevant.
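To make the claims-to-proof and RAG takeaways concrete, here is a minimal sketch of the scorecard structure we use. Every claim, proof item, and rating shown is a hypothetical example, not a real deal’s data:

```python
# Minimal sketch of a claims-to-proof RAG scorecard.
# All entries are hypothetical examples, not a real deal's data.
from dataclasses import dataclass

@dataclass
class ClaimAssessment:
    claim: str          # the startup's statement, verbatim if possible
    proof_wanted: str   # evidence that would substantiate it
    proof_seen: str     # what was actually provided
    rag: str            # "red" (show-stopper), "amber" (open risk), "green" (verified)

scorecard = [
    ClaimAssessment(
        claim="99.9% two-qubit gate fidelity",
        proof_wanted="randomized benchmarking data across the full device",
        proof_seen="single best qubit pair, idle conditions only",
        rag="amber",
    ),
    ClaimAssessment(
        claim="quantum advantage on optimization workloads",
        proof_wanted="comparison against a strong, tuned classical baseline",
        proof_seen="none",
        rag="red",
    ),
]

# Show-stoppers first: any unmitigated red means pass or wait.
for c in (c for c in scorecard if c.rag == "red"):
    print(f"RED: {c.claim} -> missing: {c.proof_wanted}")
```

The value isn’t the code; it’s the discipline of writing every claim next to the proof you wanted and the proof you actually saw, so the gaps stare back at you.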

I’ll also emphasize the positive: rigorous due diligence isn’t just about avoiding risk, it’s about spotting true gems. There are startups out there quietly hitting technical milestones, not screaming in the press, but showing real promise. A savvy diligence process can uncover those and give you the conviction to invest when others hesitate. I’ve seen this with a company that many dismissed (due to lack of hype) but our homework revealed their prototype was outperforming much larger competitors’ devices on certain benchmarks. We invested, and that company is now one of the rising stars. So diligence can give you an edge, both defensive and offensive.

As quantum technology moves from laboratory curiosity to commercial reality, the diligence playbook will continue to mature. Perhaps in a decade we’ll have industry-standard metrics and straightforward data rooms with quantum KPIs (one can dream!). Until then, we navigate with our blend of scientific scrutiny and business acumen.

Let me end on an actionable guiding question, a sort of mantra that I carry into every quantum pitch meeting or call:

Am I convinced by evidence, or merely by the story?

If you consistently ask yourself this – and push to turn story into evidence – you’ll greatly increase your chances of picking the real winners in quantum and filtering out the mirages.

Quantum Upside & Quantum Risk – Handled

My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.