Quantum Computing Companies

Quantum Circuits Inc (QCI)

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

Quantum Circuits, Inc. (QCI) is a Yale University spin-out that has pioneered a novel approach to superconducting quantum computing focused on hardware-efficient error correction. Co-founded in 2017 by leading Yale physicists (including Robert Schoelkopf, Michel Devoret, and Luigi Frunzio), QCI’s mission is to accelerate the path to fault-tolerant quantum computers by “correcting first, then scaling”.

Unlike many competitors in the superconducting qubit arena that emphasize scaling up qubit count, QCI’s strategy centers on a dual-rail cavity qubit architecture with built-in error detection at the hardware level. This distinctive hardware platform, integrating 3D microwave resonators (cavities) and transmon circuits, aims to deliver more reliable qubits with higher effective fidelity, thereby enabling consistent and repeatable quantum operations. In the broader quantum landscape, QCI stands out as a full-stack provider prioritizing fault tolerance from the outset, in contrast to the conventional NISQ-era approach of adding qubits first and worrying about errors later.

Milestones & Roadmap

Founding and Vision (2017-2019): QCI was founded in late 2017, fueled by an $18 million Series A led by Canaan and Sequoia Capital. From inception, the company’s vision was to develop a “proprietary approach for error-correcting quantum bits that is hardware-efficient and requires significantly less redundancy” than conventional schemes. The founding team – renowned for co-inventing the transmon qubit and other circuit QED breakthroughs – set out to build a modular quantum computer with an architecture that could inherently detect/correct errors and scale more gracefully. By early 2018, QCI had established a dedicated lab in New Haven and assembled ~20 engineers and scientists to begin constructing this system.

Prototype Development and Key Demonstrations: Over the next few years, QCI (in collaboration with Yale research groups) validated the core building block of its architecture: the dual-rail cavity qubit. In 2023, researchers demonstrated erasure error detection on a single dual-rail qubit, marking a critical proof-of-concept for QCI’s error-centric design. In this experiment, a qubit was encoded in a pair of superconducting resonators (“dual rails”) and an auxiliary transmon was used to detect photon loss events mid-circuit – effectively catching the qubit’s dominant error in real time. The results showed that the dual-rail qubit maintained a highly favorable error profile (with bit-flip errors virtually eliminated and photon-loss errors detectable with high fidelity). This achievement, published in 2023-24, confirmed that mid-circuit erasure detection is feasible in hardware, a significant milestone on the road to practical quantum error correction. By that time, dual-rail cavity qubits had also demonstrated millisecond-scale coherence and high single-qubit gate fidelities in the lab, giving QCI confidence in the viability of its approach.

Aqumen Full-Stack Rollout (2024): In mid-2024, QCI began unveiling its full-stack platform, branded “Aqumen.” In August 2024 the company launched the Aqumen Cloud service, which includes a custom software development kit (SDK) and high-performance simulator (AquSim) to let users design and test quantum algorithms with built-in error detection logic. This cloud framework foreshadowed the release of QCI’s first quantum processing unit later that year. In November 2024, QCI announced its Aqumen Seeker QPU – an 8-qubit quantum processor built on dual-rail cavity technology. The Seeker contains eight dual-rail cavity qubits (DRQs) in a nearest-neighbor coupling topology, and it is the industry’s first QPU with integrated hardware error detection on every qubit. This system “rounds out [QCI’s] full-stack quantum computing system,” providing cloud access to real hardware for early users in the company’s Alpha program. Notably, QCI raised a $60 million Series B investment in parallel with this launch (led by ARCH, F-Prime, and Sequoia) to support commercialization – a strong vote of confidence in their technical progress. By year’s end, enterprise partners began running experiments on the Seeker via Aqumen Cloud, exploring how error-detection features can improve algorithm reliability.

Technical Progress in 2025: The focus in 2025 has been on refining multi-qubit operations and laying the groundwork for scalable error correction. In the first half of 2025, QCI researchers achieved a major technical breakthrough: a high-fidelity entangling gate between two dual-rail qubits that preserves the qubits’ bias and error-detectability. This two-qubit gate (a controlled-Z implemented via a “SWAP-Wait-SWAP” sequence) uses a parametric transmon coupler to mediate interactions, and importantly maintains the favorable error hierarchy – measured photon-loss (erasure) rates below 1% per CZ gate, with residual (undetected) errors as low as ~0.1%. Bit-flip errors were almost non-existent (on the order of a few ppm) in this entangling operation. Such performance is well beyond the threshold of common quantum error-correcting codes, indicating that QCI’s approach can support significant logical error suppression. These results, reported in 2025, represent an essential milestone: QCI has demonstrated that all key primitives – state prep, single-qubit gates, readout, and two-qubit gates – can be executed on dual-rail qubits with “world-class” fidelity and bias-preserving error behavior.

On the deployment side, QCI has expanded strategic partnerships. A notable collaboration with quantum chemistry startup Algorithmiq was announced, applying QCI’s error-detecting hardware to advanced drug discovery algorithms. QCI also teamed up with NVIDIA and Supermicro to integrate NVIDIA’s Grace Hopper CPU-GPU superchips into QCI’s workflow, enabling accelerated simulation of error-corrected circuits and hybrid classical-quantum algorithms. This partnership, revealed in May 2025, gives QCI on-premises HPC capability to test and optimize quantum error correction codes on their dual-rail architecture at scale. Taken together, 2025’s achievements signal that QCI is moving from demonstrating core technology to scaling it: the company is actively evaluating larger quantum processor designs and more complex error-corrected workflows, leveraging classical co-processing to inform its next hardware iterations.

Roadmap Outlook: As of mid-2025, QCI is poised to articulate a public roadmap concentrating on fault-tolerant scaling. Executives indicate that upcoming generations of QCI systems will feature “bigger and more capable” quantum processors but will not chase qubit count for its own sake. Instead, each new generation is expected to deliver higher logical qubit counts at lower logical error rates, using QCI’s “correct-first, then scale” strategy as the guiding principle.

In practice, this means QCI will likely incrementally increase the number of dual-rail qubits (from the current 8 into the tens of qubits, and beyond), integrating error-correction protocols at small scales before scaling up. QCI’s leadership emphasizes that this measured approach – focusing on qubit quality and error mitigation features – will lead to fault-tolerant systems more efficiently than the brute-force approaches of some competitors. Indeed, one investor noted that QCI’s hardware has “accomplish[ed] milestones that the rest of the industry will struggle to achieve within the decade”. While exact timelines are not public, QCI’s end goal is clear: to deliver a commercially useful, fault-tolerant quantum computer on a faster schedule by virtue of dramatically lower error rates and overhead. The company’s forthcoming roadmap is expected to detail this trajectory, emphasizing practical system size, error-correction efficiency, and real-world algorithm benchmarks over raw qubit numbers.

Focus on Fault Tolerance

Fault tolerance is the core of QCI’s strategy, fundamentally shaping its hardware and software design. QCI explicitly rejects the notion of scaling up noisy qubits first and worrying about errors later; instead, it espouses a “Correct First, Then Scale” philosophy. This means every level of QCI’s stack – from physical qubit architecture to compiler features – is built to detect and handle errors, with the aim of minimizing the resources needed for full error correction. Key aspects of QCI’s fault-tolerance approach include:

Dual-Rail Qubits with Erasure Detection

QCI’s dual-rail cavity qubit is engineered such that its dominant error mechanism (photon loss) is immediately detectable as an “erasure” event at the hardware level. In practice, each qubit is encoded in two cavities with at most one photon shared between them. If that photon is lost (e.g. due to a cavity decay), the qubit collapses to a known invalid state (neither 0 nor 1) – which registers as a distinctive third measurement outcome (*) indicating an error. This built-in Quantum Error Detection (QED) capability is unique to QCI’s platform. It converts what would be random errors into erasures (errors with known location), which are far easier for quantum error-correcting codes to handle. For example, the threshold error rate for the surface code rises from ~1% to almost 25% when errors are erasures rather than unknown Pauli errors. By making erasures the predominant error type, QCI dramatically relaxes the error rate requirements for fault tolerance. Indeed, in QCI’s dual-rail qubits erasures dominate over dephasing by a factor of 5-10, and bit-flip errors are “several orders of magnitude rarer” than dephasing. This highly biased noise profile (with only one main error channel) can be exploited by high-threshold codes (e.g. erasure-tailored surface codes or XZZX codes) for more efficient error correction.
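
To make that error hierarchy concrete, here is a toy Monte Carlo sketch of what such a channel looks like from the user's side. The rates are illustrative placeholders chosen to mirror the bias described above (loss-dominated and heralded), not QCI's measured numbers:

```python
import random

# Illustrative per-operation error rates (placeholders, not QCI measurements):
# photon loss is heralded as an erasure; dephasing is silent; bit flips are
# negligible, mirroring the biased error profile described above.
P_LOSS = 0.01      # heralded erasure per operation
P_DEPHASE = 0.002  # undetected phase error per operation
P_BITFLIP = 1e-6   # essentially negligible

def one_operation(rng):
    """Return the error outcome of a single dual-rail operation."""
    r = rng.random()
    if r < P_LOSS:
        return "erasure"    # photon lost -> qubit leaves the code space, flagged as '*'
    if r < P_LOSS + P_DEPHASE:
        return "dephasing"  # silent Z error
    if r < P_LOSS + P_DEPHASE + P_BITFLIP:
        return "bit-flip"   # silent X error (rare by construction)
    return "ok"

def tally(n_ops=200_000, seed=0):
    rng = random.Random(seed)
    counts = {"ok": 0, "erasure": 0, "dephasing": 0, "bit-flip": 0}
    for _ in range(n_ops):
        counts[one_operation(rng)] += 1
    return counts

if __name__ == "__main__":
    c = tally()
    faulty = c["erasure"] + c["dephasing"] + c["bit-flip"]
    print(c)
    print(f"fraction of faults that announce themselves: {c['erasure'] / faulty:.1%}")
```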

“Detect and Correct” Workflow

QCI envisions a fault-tolerant quantum computer where error detection is seamlessly intertwined with computation. Its system supports mid-circuit error checks and real-time error handling instructions in a way that no other current platform does. Users can insert custom erasure detection points in their algorithms – effectively querying whether any qubit has lost its photon – and leverage QCI’s control electronics to respond immediately. If an error is detected during a circuit, several strategies become possible: the computation can branch to a recovery subroutine, a flagged qubit can be reset and reinitialized on the fly, or the event can simply be recorded for post-processing (post-selection). QCI’s control stack includes a Real-Time Control Flow (RTCF) engine that supports conditional branching, fast feedback, and even classical arithmetic during quantum execution. This enables, for instance, feed-forward error correction steps or dynamic rerouting of an algorithm when an error is flagged.

In near-term usage, QCI suggests users employ Error Detection Handling (EDH) features such as mid-circuit erasure flags and end-of-run error reports to improve algorithm reliability via post-selection. By discarding runs in which any qubit produced an * (indicating an erasure occurred) – or by computationally correcting those errors – one can obtain output distributions with much higher fidelity. While post-selection is not a scalable long-term solution, it is a valuable interim tool for “error-aware” algorithm development.
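
As a concrete picture of this post-selection workflow, here is a minimal sketch using a made-up shot format (a measured bitstring plus per-qubit erasure flags); QCI's actual SDK output will have its own schema. Runs in which any qubit reported * are simply dropped before the histogram is built:

```python
from collections import Counter

# Each shot: (measured bitstring, tuple of per-qubit erasure flags).
# This record format is hypothetical -- QCI's Aqumen SDK defines its own schema.
shots = [
    ("010", (False, False, False)),
    ("010", (False, False, False)),
    ("110", (False, True,  False)),   # qubit 1 returned '*': erasure detected
    ("011", (False, False, False)),
    ("000", (True,  False, False)),   # erasure on qubit 0
    ("010", (False, False, False)),
]

def post_select(shots):
    """Discard any shot in which at least one qubit flagged an erasure."""
    kept = [bits for bits, flags in shots if not any(flags)]
    discarded = len(shots) - len(kept)
    return Counter(kept), discarded

if __name__ == "__main__":
    hist, dropped = post_select(shots)
    print("post-selected histogram:", dict(hist))
    print(f"discarded {dropped} of {len(shots)} shots due to detected erasures")
```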

In the long term, the same hardware error flags will be inputs to quantum error-correcting codes, allowing those codes to focus only on correcting residual Pauli errors while the erasures are identified and handled separately. QCI’s philosophy is that by actively managing errors as they occur (through detection and fast feedback) rather than passively tolerating them, one can reach the fault-tolerance regime with far fewer qubits.

Bias-Preserving Operations and High Thresholds

An important nuance of QCI’s fault-tolerance approach is preserving the structure of errors through all operations. It’s not enough to have qubits that rarely flip – the gates between them must also respect the bias (i.e. not introduce new error channels). QCI’s engineers have devoted effort to designing entangling gates that do not spoil the favorable error statistics of the dual-rail qubits. The recently demonstrated two-qubit CZ gate is a prime example: it is an excitation-preserving gate (no photons are created or destroyed during the interaction), which means a photon loss in one qubit remains an erasure error (and does not cause, say, a random flip on the partner qubit). In tests, this gate achieved <1% erasure probability and only ~0.1% decoherence per operation after conditioning on no erasures. Moreover, it introduced an asymmetry where any residual dephasing errors occurred mostly on the control qubit, not the target – a feature that can be exploited in error-correcting code design (e.g. using the less-noisy qubit for certain syndrome measurements).

The bottom line is that QCI’s hardware strives to preserve a high degree of error bias throughout computation, which allows error correction to reach threshold with lower overhead. Simulations indicate that with erasure-type errors and biased noise, even modest-size codes can suppress errors rapidly. QCI’s own team has noted that the performance levels they’ve demonstrated are “well past predicted surface code thresholds”, suggesting that each additional increase in code distance should exponentially reduce logical error rates. This gives confidence that a fault-tolerant architecture built on these qubits can scale without an inordinate qubit count.
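
To illustrate why operating "well past threshold" matters, the usual rule of thumb is that a distance-d code suppresses errors as p_L ≈ A·(p/p_th)^((d+1)/2). The sketch below evaluates that formula for two hypothetical thresholds (a ~1% Pauli-noise threshold versus a higher erasure-dominated one); the specific numbers are illustrative, not QCI projections:

```python
def logical_error_rate(p_phys, p_threshold, distance, prefactor=0.1):
    """Rule-of-thumb scaling for a distance-d code: p_L ~ A * (p/p_th)^((d+1)/2)."""
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) // 2)

if __name__ == "__main__":
    p_phys = 0.005  # 0.5% physical error per operation (illustrative)
    for label, p_th in [("Pauli-noise threshold ~1% (hypothetical)", 0.01),
                        ("erasure-dominated threshold ~5% (hypothetical)", 0.05)]:
        print(label)
        for d in (3, 5, 7, 9, 11):
            print(f"  d={d:2d}  p_L ~ {logical_error_rate(p_phys, p_th, d):.2e}")
```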

Reduced Overhead and Efficiency Focus

By attacking the error problem at the root, QCI expects to dramatically cut down the number of physical qubits required per logical qubit. In conventional transmon-based schemes, estimates for a single logical qubit (with, say, ~10⁻⁹ error) often run into hundreds of physical qubits when using surface codes at physical error rates ~1%. QCI projects that with error-detecting dual-rail qubits, the overhead can be an order of magnitude lower – on the order of 10-20 physical qubits per logical qubit, instead of ~200 in a comparable non-erasure scheme. This claim is consistent with theoretical analyses of erasure codes and with QCI’s own experimental data. Investor materials and interviews highlight that fewer resources are needed in QCI’s approach: “about 10-20 physical qubits instead of 200 for each error-corrected logical qubit” in some setups. Achieving such low overhead would be game-changing – it means a small-scale device could encode a handful of logical qubits, enabling QCI to demonstrate truly error-corrected computation much sooner than a brute-force system. This philosophy was succinctly captured by QCI’s Chief Scientist, Rob Schoelkopf: “We are achieving better results with fewer qubits… [our] efficient approach accelerates the path to fault-tolerant, commercial-ready quantum computing.” In essence, QCI is attempting to “run the right race” – focusing on qubit quality and error resilience rather than sheer quantity.
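
A back-of-envelope sketch of where a "10-20 versus ~200" comparison can come from: a rotated surface-code patch of distance d uses roughly 2d² − 1 physical qubits, so if a cleaner, erasure-dominated error channel permits a much smaller d at the same logical error rate, the per-logical-qubit overhead shrinks quadratically. The distances below are my own illustrative choices, not QCI figures:

```python
def surface_code_patch(distance):
    """Approximate physical-qubit count of a rotated surface-code patch:
    d^2 data qubits plus d^2 - 1 measurement ancillas."""
    return 2 * distance ** 2 - 1

if __name__ == "__main__":
    # Illustrative: a conventional device might need d ~ 11 for a useful logical
    # qubit, while a strongly erasure-biased device might reach a comparable
    # logical error rate at d ~ 3 (hypothetical choices).
    for label, d in [("conventional, d=11", 11), ("erasure-biased, d=3", 3)]:
        print(f"{label:22s} -> ~{surface_code_patch(d)} physical qubits per logical qubit")
```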

The ultimate goal is a logical qubit with error rates low enough for arbitrary-length computation, built from as few physical qubits as possible. All of QCI’s unique design choices (cavities, dual-rail encoding, erasure detection, real-time feedback) serve this goal of minimizing overhead on the road to fault tolerance.


In summary, QCI’s fault-tolerance strategy can be viewed as “fault prevention” and mitigation at the qubit level. By giving each qubit some self-awareness of its error state and by architecting operations that respect the error structure, they alleviate much of the burden that would otherwise fall on higher-level error-correcting codes.

This strategy is already demonstrating tangible benefits: in trials, using QCI’s error detection, quantum algorithms have shown significantly boosted fidelity when ignoring runs where errors were flagged. As QCI scales up, the same capabilities lay an essential foundation for implementing full error correction (e.g. surface code cycles with erasure syndromes) on a smaller fabric of qubits. It is a pragmatic path to fault tolerance, emphasizing “detect, correct, then scale” – a path QCI argues will reach useful quantum computing faster than the conventional qubit arms race.

CRQC Implications

One important lens to evaluate QCI’s roadmap is cryptographically relevant quantum computing (CRQC) – the scale at which a quantum computer could break modern cryptography (e.g. factoring 2048-bit RSA) or perform similarly classically intractable tasks. Achieving CRQC reliably is generally seen as the endgame of fault-tolerant quantum computing, often estimated to require on the order of millions of physical qubits with today’s error rates. The question is how QCI’s approach might alter the timeline or requirements for CRQC.

Alignment with Industry Trajectories

Many major players (IBM, Google, etc.) have published aggressive roadmaps aiming for >1 million physical qubits and thousands of logical qubits by the later 2020s or 2030s – figures deemed necessary for running Shor’s algorithm on RSA-sized integers or other cryptographically relevant problems. These roadmaps implicitly assume conventional error correction overheads, where each logical qubit might consume 1,000 or more physical qubits to reach the requisite error rates. QCI’s plan deviates by targeting far fewer physical qubits per logical qubit (as discussed above), which in principle means that a CRQC-scale machine could be achieved with an order of magnitude fewer physical qubits. According to QCI, its dual-rail architecture “reduces the required overhead by at least an order of magnitude, creating a pathway toward practical systems based on tens of thousands of qubits rather than millions.”

In concrete terms, if a conventional approach might need ~1 million physical qubits for a full-break RSA machine, QCI’s approach could aim for ~100k physical qubits for the same task. This is a profound difference: 100,000 qubits might be feasible to engineer within a decade (with concentrated effort and assuming current scaling trends), whereas a million-plus qubit machine is a more distant prospect.

Accelerated Timeline Potential

By focusing on error suppression now, QCI could reach the fault-tolerant threshold earlier, albeit on smaller processors. The moment a single logical qubit exceeds “break-even” (meaning it lives longer with error correction than without), a new era begins – and QCI is targeting that inflection point aggressively. The company’s latest results already meet the criteria for logical qubits to outperform physical ones when using erasure-aware codes. If QCI can, in the next couple of years, demonstrate a logical qubit with substantially improved coherence (e.g. using a small distance erasure code on ~10-20 physical qubits), it will mark one of the first fault-tolerant qubit implementations in any platform. Such a milestone would indicate that scaling up that architecture is mostly an engineering challenge, not a theoretical one, and it could hasten the timeline for running quantum algorithms that threaten cryptographic protocols. In investor statements, QCI’s hardware advances have been touted as “chart[ing] the clearest path to realizing the promise of quantum computing” and potentially winning the race to error-corrected systems ahead of others. While these statements are optimistic, they reflect a view that QCI might deliver useful fault-tolerant qubits on a timescale of a few years, rather than a decade.

Resource Estimates for CRQC

To break RSA-2048 via Shor’s algorithm, estimates often call for on the order of a few thousand logical qubits with error-corrected gate depths on the order of billions. Even with QCI’s error detection reducing overhead, CRQC is not around the corner – it will still require scaling their technology by several orders of magnitude from today’s 8-qubit device.

However, the critical point is scaling quality vs quantity. QCI’s view is that the industry’s qubit-count “arms race” is a misleading metric; one must consider how many usable (logical) qubits can be obtained and at what error rate. A machine with, say, 100,000 physical dual-rail qubits might yield a few hundred to a thousand logical qubits (if 10-100 physical per logical in various code layers), which could be sufficient for CRQC-level tasks. Competing architectures might need an order more physical qubits to get the same logical qubit count.
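
The arithmetic behind this paragraph is worth writing down explicitly; the sketch below just multiplies the quoted ranges (logical qubits needed times physical qubits per logical) and is in no sense a rigorous resource estimate:

```python
def machine_size(logical_qubits, phys_per_logical):
    """Total physical qubits = logical qubits times per-logical overhead."""
    return logical_qubits * phys_per_logical

if __name__ == "__main__":
    logical_needed = 2000  # order of the "few thousand" logical qubits quoted above (rough)
    scenarios = {
        "conventional (~1000 phys/logical)": 1000,
        "erasure-based, pessimistic (100 phys/logical)": 100,
        "erasure-based, optimistic (20 phys/logical)": 20,
    }
    for name, overhead in scenarios.items():
        print(f"{name:46s} -> ~{machine_size(logical_needed, overhead):,} physical qubits")
```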

Thus, QCI’s roadmap – if realized – implies that cryptographically relevant quantum computing could be achieved with a smaller and possibly sooner-to-be-built device than many currently assume. One concrete implication is on timelines suggested by government and industry experts: if the expected date for breaking RSA is, say, ~2030 under the assumption of million-qubit machines, a successful QCI approach might pull that timeline forward if a ~100k-scale machine can be built by the late 2020s.

It’s worth noting that QCI has the backing of In-Q-Tel (the strategic investor for U.S. intelligence), indicating interest in its potential for national security applications like cryptanalysis.

Caveats and Unknowns

It must be emphasized that scaling to even tens of thousands of physical qubits is a formidable challenge (discussed more in the Challenges section). QCI’s current achievements, while impressive, are on the order of 10 qubits. Extrapolating to CRQC-level hardware requires confidence that no new error modes or bottlenecks emerge when thousands of cavities and couplers operate together. Furthermore, algorithms like Shor’s are long and complex; they will demand not just high qubit counts but also high clock speed and parallelism. Here, QCI’s use of superconducting tech is a plus – gates are fast (tens of nanoseconds) and their architecture could in principle support rapid parallel operations with classical feedback in between. QCI has noted that superconducting approaches are “fast… maintaining this speed while enhancing reliability… is where [our] cavity approach comes in”. The company is already thinking about integration with HPC and AI to handle the heavy classical processing in quantum error correction loops, which will be crucial for any CRQC system.


In summary, QCI’s hardware trajectory appears well-aligned with the requirements of CRQC in the sense that it directly targets the primary obstacle (error correction). If QCI meets its goals, it could significantly reduce the qubit threshold for cryptographic breakthroughs, potentially reaching that capability with a machine of tens of thousands of qubits rather than millions. This efficiency could translate to an earlier realization of cryptographically relevant quantum computing, assuming the engineering challenges of building such a system are overcome. In the meantime, QCI’s focus on delivering small-scale fault tolerance will likely provide valuable stepping stones: for instance, early demonstrations of error-corrected algorithms (like chemistry simulations or simpler cryptographic tasks) on their 8-qubit and next-generation devices would show the world that qubit errors can be tamed in hardware. Each such milestone – e.g. executing a quantum algorithm with realtime error detection and seeing superior performance – builds confidence that the leap to CRQC, while still requiring years of R&D, is on a track that might be shortened by innovation. Overall, QCI’s roadmap doesn’t just align with CRQC timelines; it seeks to reshape them by changing the efficiency curve of quantum error correction.

Modality & Strengths/Trade-offs

QCI’s hardware modality is a superconducting dual-rail cavity qubit architecture – a departure from the 2D transmon qubits used by many quantum computing efforts. This approach leverages the advantages of 3D microwave resonators (cavities) for storing quantum information, while still using superconducting circuits (transmons and couplers) for control and readout. Because I haven’t previously written about this particular modality, I will spend some extra time here on its technical details, highlighting its strengths and inherent trade-offs:

Qubit Encoding – Dual Cavities (Dual-Rail)

Each QCI qubit is not a single circuit element, but rather an encoded qubit spread across two high-Q superconducting resonators (often cylindrical 3D cavities). The logical states |0⟩ and |1⟩ are represented by a single microwave photon being in one cavity or the other. For example, one cavity carrying one photon (and the partner empty) might represent logical 0, whereas the opposite occupancy represents logical 1.

Crucially, the dual-rail qubit always has exactly one photon in the two-cavity system (neglecting errors) – this symmetry is what allows photon loss to be detected as an invalid (no-photon) state. An auxiliary superconducting qubit (a transmon) is typically coupled to one of the cavities to facilitate state initialization and measurement. The two cavities are also connected by a nonlinear coupler (another Josephson element, such as a flux-tunable SQUID) which enables photon transfer between them. In current prototypes, the cavities are 3D coaxial resonators about 5 mm in diameter (for reference, these are significantly larger than on-chip resonators, but QCI plans to shrink them to micro-fabricated scales in future generations).
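
A small numpy sketch of that encoding, truncating each cavity to at most one photon: logical |0⟩ and |1⟩ correspond to |1,0⟩ and |0,1⟩ in the two-cavity Fock basis, and photon loss (an annihilation operator on either cavity) sends the state to |0,0⟩, which lies outside the code space and can therefore be heralded as an erasure:

```python
import numpy as np

# Single-mode operators in the {|0>, |1>} photon-number truncation.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # annihilation operator
I = np.eye(2)

def ket(n_left, n_right):
    """Two-cavity Fock state |n_left, n_right> as a length-4 vector."""
    v = np.zeros(2); v[n_left] = 1.0
    w = np.zeros(2); w[n_right] = 1.0
    return np.kron(v, w)

# Dual-rail logical states: exactly one photon shared between the two cavities.
logical0 = ket(1, 0)
logical1 = ket(0, 1)
vacuum   = ket(0, 0)              # the 'erasure' state -- outside the code space

loss_left  = np.kron(a, I)        # photon loss from the left cavity
loss_right = np.kron(I, a)        # photon loss from the right cavity

if __name__ == "__main__":
    plus = (logical0 + logical1) / np.sqrt(2)   # a logical superposition
    for name, L in [("left-cavity loss", loss_left), ("right-cavity loss", loss_right)]:
        damaged = L @ plus
        damaged = damaged / np.linalg.norm(damaged)
        # All surviving amplitude lands on |0,0>, so a photon-number check of the
        # pair heralds the error without revealing 0-vs-1 information.
        overlap = abs(vacuum @ damaged) ** 2
        print(f"{name}: post-loss state has {overlap:.0%} weight on |0,0>")
```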

The decision to use 3D cavities stems from their exceptional coherence properties: superconducting cavities can store photons with lifetimes on the order of hundreds of microseconds to milliseconds, far exceeding the coherence times of typical transmon qubits. This high coherence is one pillar of QCI’s approach – the longer a physical qubit can maintain quantum information, the fewer error correction cycles are needed.

Control Mechanisms

Despite the unusual qubit encoding, QCI’s dual-rail platform supports a universal gate set through clever control schemes:

Single-Qubit Gates

Operations like X, Y, Z rotations on a dual-rail qubit are implemented by driving the coupler between the two cavities. By applying an RF flux pulse that induces a beam-splitter interaction between the cavities, QCI can enact an arbitrary rotation in the single-excitation subspace. A full photon transfer from one cavity to the other implements an X gate (bit-flip), since it swaps the |0⟩ and |1⟩ logical assignments. Partial swaps (e.g. 50% transfer) create superposition states, achieving arbitrary single-qubit rotations. This is analogous to beam-splitter operations in optical dual-rail qubits. Because these operations occur via a resonant exchange, they can be very fast – on the order of tens of nanoseconds – and have been demonstrated with high fidelity (error rates in the 10⁻³ range or better).

Importantly, these photon-exchange gates preserve the total photon number, meaning they do not take the qubit out of its code space aside from a lost-photon error. That helps ensure that a single-qubit gate doesn’t introduce leakage errors.
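
Restricted to the single-excitation subspace {|1,0⟩, |0,1⟩}, the beam-splitter interaction acts as an ordinary 2×2 rotation, so single-qubit gates reduce to choosing the mixing angle of the photon exchange. A toy parametrization (my own convention, not QCI's pulse-level calibration):

```python
import numpy as np

def beam_splitter(theta, phi=0.0):
    """Photon-exchange unitary restricted to the single-excitation subspace
    span{|1,0>, |0,1>}: theta sets how much of the photon is transferred,
    phi sets the relative phase of the exchange."""
    return np.array([
        [np.cos(theta / 2), -1j * np.exp(1j * phi) * np.sin(theta / 2)],
        [-1j * np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)],
    ])

if __name__ == "__main__":
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    full_swap = beam_splitter(np.pi)       # complete photon transfer
    half_swap = beam_splitter(np.pi / 2)   # 50/50 transfer -> equal superposition
    # A full transfer is an X gate up to a global phase (-i in this convention).
    print("full swap equals X up to phase:", np.allclose(full_swap, -1j * X))
    print("|0_L> after a 50/50 swap:", np.round(half_swap @ np.array([1, 0]), 3))
```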

Two-Qubit Gates

Entangling two dual-rail qubits is more complex but follows a similar philosophy of photon-conserving interactions. In QCI’s 8-qubit Seeker processor, qubits are arranged in a nearest-neighbor topology, so each cavity is potentially coupled to a neighbor’s cavity through some tunable bus. The recent controlled-Z gate implementation gives insight into the mechanism: to entangle qubit A and qubit B, QCI uses a parametric coupler transmon that connects one cavity of A with one cavity of B. The CZ gate is realized by a sequence: swap the excitation from A’s cavity into the coupler, wait briefly to accumulate a conditional phase on B’s cavity via the coupler’s dispersive shift, then swap the excitation back to A’s cavity. This “swap-wait-swap” sequence effectively produces a CZ between the dual-rail qubits without ever populating a cavity with more than one photon. In the QCI experiment, the whole operation was completed on nanosecond timescales and achieved an interleaved two-qubit gate fidelity on the order of 99.9% (after accounting for detected erasures).

More generally, two-qubit gates on this platform can leverage dispersive couplings and flux modulation to create beam-splitter interactions between any two resonators, mediated by couplers. The design uses multiple SQUID-based couplers to give flexibility in choosing which cavities to entangle. While the hardware wiring is more elaborate than a simple fixed capacitive coupling on a chip, the outcome is a high degree of control: QCI can selectively entangle nearest neighbors with minimal crosstalk, and do so in a way that preserves the photon-number parity (hence preserving error detectability).

The trade-off is that each gate involves intermediate steps (swapping into/out of a coupler) that need calibration to ensure phase alignment. The experimental evidence so far suggests these parametric gates can be tuned to a very low error budget (with undetected error ~0.1% per gate).
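
To see why swap-wait-swap composes to a CZ, here is a deliberately over-simplified toy model: one binary mode for A's coupled cavity, one for the coupler, and B's logical state, with an idealized π phase accumulating only while the coupler holds the excitation and B is in |1⟩. It ignores the real pulse-level physics and calibration entirely, but it reproduces the logic of the sequence:

```python
import numpy as np

I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)   # exchanges two binary modes

# Hilbert-space ordering: (photon in A's coupled cavity) x (coupler) x (B logical),
# basis index = 4*a + 2*c + b.
swap_A_coupler = np.kron(SWAP, I2)

# "Wait" step: an idealized pi phase accumulates only when the coupler holds the
# excitation AND B is in logical |1> (i.e. B's coupled cavity is occupied).
wait = np.eye(8, dtype=complex)
for a_occ in (0, 1):
    for c_occ in (0, 1):
        for b in (0, 1):
            if c_occ == 1 and b == 1:
                idx = 4 * a_occ + 2 * c_occ + b
                wait[idx, idx] = -1.0

cz_sequence = swap_A_coupler @ wait @ swap_A_coupler   # swap, wait, swap back

if __name__ == "__main__":
    # Check the action on the logical basis (coupler starts and ends empty):
    # only |a=1, b=1> should pick up a minus sign, i.e. a CZ gate.
    for a in (0, 1):
        for b in (0, 1):
            idx = 4 * a + b          # coupler occupation = 0
            state = np.zeros(8, dtype=complex)
            state[idx] = 1.0
            out = cz_sequence @ state
            print(f"|a={a}, b={b}> -> phase {out[idx].real:+.0f}")
```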

Measurement and Reset

Reading out a dual-rail qubit is accomplished via the auxiliary transmon and its readout resonator. QCI uses the transmon as a quantum sensor to distinguish the states of the cavities. In one approach, the transmon frequency will shift depending on the photon number in the coupled cavity; by performing a joint measurement of the transmon, one can infer if the photon was in the “0” cavity, “1” cavity, or neither (the erasure case). The company reported “world-class state preparation and measurement” performance on their Seeker QPU, implying readout fidelities approaching the best in superconducting qubits (which is >99%). If a measurement finds an erasure (*), the system can in real-time flag that and potentially reset the qubit (by re-injecting a photon) to continue the computation, thanks to the real-time control flow. Fast qubit reuse after measurement is one of the toolkit features QCI highlights (e.g. for mid-circuit resets in ancilla qubits).

One challenge is that measuring the transmon to get the qubit result will collapse the qubit, so multi-qubit parity checks (syndrome measurements) would require more elaborate sequences or additional ancillae. However, given the flexible classical control, QCI could perform syndrome extraction by measuring patterns of erasures and conventional outcomes across multiple qubits.

Architectural Differentiators and Strengths

High Coherence & Low Native Error Rates

By storing quantum information in microwave cavity modes, QCI benefits from intrinsically long-lived qubits. A well-fabricated superconducting cavity can have a photon lifetime T₁ on the order of 1-10 ms (depending on materials and frequency), which is orders of magnitude longer than a typical transmon’s T₁ (~50-100 µs). Even if practical cavities in the integrated system have somewhat shorter lifetimes (some reported QCI/Yale cavities are in the 0.5-1 ms range), this still yields very low physical error rates per operation. Long T₁ means fewer spontaneous errors during gate operations, and any error is likely to be the single-photon loss kind.
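
A quick sanity check of the coherence argument: if errors during a gate are dominated by energy decay, the decay contribution per gate is roughly 1 − exp(−t_gate/T₁). The sketch compares an illustrative 50 ns gate on millisecond-scale cavities against the same gate on an 80 µs transmon (round numbers drawn from the ranges quoted above):

```python
import math

def decay_error(gate_time_s, t1_s):
    """Rough probability of an energy-decay event during one gate."""
    return 1.0 - math.exp(-gate_time_s / t1_s)

if __name__ == "__main__":
    gate_time = 50e-9  # 50 ns gate, illustrative
    for label, t1 in [("cavity, T1 = 1 ms", 1e-3),
                      ("cavity, T1 = 0.5 ms", 0.5e-3),
                      ("transmon, T1 = 80 us", 80e-6)]:
        print(f"{label:22s} -> decay error per gate ~ {decay_error(gate_time, t1):.1e}")
```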

Additionally, cavities are linear resonators (no Josephson junctions inside them), so they are free of 1/f flux noise and other decoherence sources that plague qubits like transmons. This gives them extremely stable frequencies and reduces noise-induced dephasing. Indeed, QCI notes that cavities have a “very favorable error profile” – essentially only one significant error mechanism (loss) – whereas a transmon or other qubit has multiple (energy loss, dephasing from noise, leakage to higher levels, etc.). The result is that the dual-rail cavity qubit can operate with error rates per gate significantly below those of standard superconducting qubits, once erasures are post-selected out. In the demonstrated CZ gate, for instance, the effective error (conditioned on no erasure) was ~0.1%, far better than the ~1% error of a typical CZ on transmons. Even including erasures (which would be corrected by higher-level codes), the total error was ~1%, still on par with the state of the art but with the crucial difference that 90% of those errors announce themselves.

This inherent error robustness is a primary strength – QCI’s qubits are “born fault-tolerant-ready” in a sense, since their error rates are near or below thresholds without massive physical scaling.

Impeccable Isolation and Low Crosstalk

Because QCI’s qubits are spatially separated 3D components rather than densely packed on a chip, they enjoy excellent isolation from each other and from spurious environmental coupling. QCI likens cavities to isolated elevators: put two photons in two separate cavities and “they don’t start affecting each other in uncontrolled ways”. This contrasts with planar chips where dozens or hundreds of microwave elements sit in close proximity, which can lead to cross-coupling, cross-talk, and frequency crowding (qubits perturbing each other’s frequencies, etc.). The 3D architecture inherently mitigates these issues – cavities can be designed with very low external coupling except through intended channels.

Mechanical stability is also high; vibrations or dielectric loss in a chip can be more problematic than in a machined cavity. By maintaining qubits as more isolated modules, QCI can reduce unintended interactions and thus reduce errors from those sources. This is important for scaling: a dense chip of 1000 qubits may suffer from significant coherent errors due to cross-talk, whereas 1000 well-isolated cavity qubits could be easier to tune and calibrate independently.

The trade-off is physical size, but QCI is betting that improved reliability per qubit outweighs complications of a larger footprint.

Single Error Type (Erasures) – Simpler Error Correction

As discussed, having one dominant error that is also detectable simplifies the entire error correction problem. In QCI’s words, “having one dominant source of error that is detectable reduces the hurdle to scalable error correction”, making performance requirements more forgiving and reducing hardware overhead. This is a profound architectural differentiator: most other modalities (ions, spins, planar transmons) must grapple with multiple error channels that all must be suppressed or corrected. QCI’s dual-rail qubit effectively funnels all significant error processes into one lane, which can be addressed with one kind of check (photon presence/absence).

The benefit is twofold: first, engineers can focus on optimizing against that one error (e.g. improving cavity Q to reduce loss, which is a well-understood materials problem), and second, error-correcting codes can be optimized to assume erasures, taking advantage of the high threshold and simpler decoding. This strategic narrowing of error channels gives QCI a potential speed lane to fault tolerance.

Fast Gates and Compatibility with Classical Compute

Being a superconducting circuit platform, QCI’s modality retains the high gate speeds (GHz-range control) that are a hallmark of superconducting qubits. Single- and two-qubit operations in tens of nanoseconds mean the processor can perform many operations within coherence times, and also that quantum error correction cycles can be executed quickly (important for keeping up with error occurrence).

Moreover, QCI has heavily invested in a classical co-processor infrastructure (custom electronics and integration with NVIDIA’s CUDA-Q platform). This means their system is geared for real-time classical-quantum integration, a necessity for active error correction and for algorithms like variational or adaptive circuits. Features like conditional branching and loop execution directly in the QPU’s control (via QCI’s QCDL programming interface) allow for a tight interplay between classical and quantum processing. In sum, QCI’s architecture can exploit the rapid feedback loop: measure an error, process that info classically, apply a correction – all within a fraction of a microsecond. This capability is not universally available in other platforms (for example, some cloud quantum systems do not yet allow mid-circuit measurement and feed-forward, or do so with large latencies). QCI’s design thus stands out in offering a fully error-aware programming model down to the hardware level, which is a significant strength for near-term experiments and a necessity for long-term fault tolerance.

Trade-offs and Challenges

Despite its many advantages, QCI’s modality comes with trade-offs and open challenges:

Hardware Complexity & Footprint

A dual-rail qubit is a composite object requiring multiple components: two high-Q cavities, at least one coupler, one readout transmon, and associated wiring (drive lines, flux lines, readout resonators). For example, an experimental two-qubit module required three coupler transmons and two ancilla transmons in addition to four cavities. Scaling this up means a large number of physical devices and interconnections. The current 8-qubit Seeker likely contains on the order of 8×2 = 16 cavities and dozens of Josephson junction-based circuits (couplers and ancillas). This is far more components than an 8-transmon chip.

The spatial volume needed for cavities also makes the approach less dense. QCI acknowledges this and has indicated plans to shrink the cavities down to microscopic size in later generations. One approach could be using on-chip superconducting resonators (e.g. stripline or 3D-integrated cavities) to replicate the function of the current 5 mm 3D cavities. However, miniaturizing without introducing loss is non-trivial – smaller resonators often have lower Q due to surface losses. Thus, there is a challenge in maintaining the remarkable coherence of the large cavities as the design moves toward a more scalable form factor. This is an active engineering problem: can QCI fabricate “micro-cavities” or other resonant structures that preserve >100 µs photon lifetimes? The answer will determine how easily the architecture can extend to hundreds or thousands of qubits within a single cryostat.
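
The miniaturization question can be restated as a quality-factor requirement, since photon lifetime and Q are related by roughly T₁ = Q/(2πf). The sketch below inverts that relation for a few target lifetimes at an assumed ~6 GHz cavity frequency (the frequency is my assumption, not a QCI specification):

```python
import math

def required_q(t1_seconds, frequency_hz):
    """Quality factor needed for photon lifetime T1 at a given cavity frequency,
    using T1 = Q / (2 * pi * f)."""
    return 2 * math.pi * frequency_hz * t1_seconds

if __name__ == "__main__":
    f = 6e9  # assumed cavity frequency (~6 GHz); actual hardware may differ
    for t1 in (100e-6, 500e-6, 1e-3):
        print(f"T1 = {t1 * 1e6:6.0f} us  ->  Q ~ {required_q(t1, f):.1e}")
```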

Calibration and Control Overhead

With more circuit elements per qubit, QCI’s system has many control knobs that must be calibrated – frequencies of cavities, frequencies of transmons, coupler biases, pump tones for parametric gates, etc. Tuning up a dual-rail qubit to high fidelity involves aligning the resonance of two cavities, setting the correct dispersive couplings to transmons, and calibrating the photon-swap pulses precisely (for both single- and two-qubit gates). The two-qubit gate, for instance, required a careful calibration of the swap durations and phases to maximize entangling interaction while minimizing residual excitation in the coupler.

As the system scales, the calibration problem may grow – although QCI’s emphasis on isolation could mean each qubit (or pair) can be calibrated fairly independently, which helps. The need for multiple microwave/control lines per qubit (likely a couple of flux drives for couplers and transmons, plus readout lines) also puts pressure on classical electronics and fridge wiring. Cryogenic I/O becomes a concern if thousands of lines are needed, though advances in multiplexing and cryo-control electronics (possibly in collaboration with NVIDIA/Supermicro HPC tech) may address this.

Essentially, QCI trades off qubit count for circuit complexity per qubit, which could make scaling more labor-intensive unless automation and robust calibration algorithms are developed.

Error of Erasure

While erasure errors are easier to correct in principle, they do come at a cost: if an erasure occurs and you drop that run (post-selection) or invoke an error correction procedure, there’s an overhead either in runtime or in additional qubits (syndromes).

In near-term usage, QCI’s scheme of discarding any runs with errors means you might need to run the algorithm more times to get enough error-free results. For example, if each run has a 5% chance of an erasure somewhere among all qubits, about 5% of runs are thrown out (in practice the probability grows with circuit depth). QCI’s recent hardware shows very low erasure rates per gate (~0.5-1%), so for short circuits the odds of an error-free run are quite high. Still, this sampling overhead is a consideration – algorithms will need more shots to get the same confidence if some fraction are discarded due to detected errors.
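
The sampling overhead is easy to estimate under a simple independence assumption: if each of N error-detection slots in a circuit flags an erasure with probability p, the kept fraction of shots is about (1 − p)^N, and the required shot count grows by its inverse. A sketch under exactly those assumptions:

```python
def keep_fraction(p_erasure_per_slot, n_slots):
    """Probability that a run survives post-selection, assuming independent slots."""
    return (1.0 - p_erasure_per_slot) ** n_slots

if __name__ == "__main__":
    p = 0.01  # ~1% heralded erasure per gate/check, in line with the rates quoted above
    for n in (10, 50, 100, 500):
        kept = keep_fraction(p, n)
        print(f"{n:4d} slots: keep {kept:6.1%} of shots -> ~{1 / kept:.1f}x more shots needed")
```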

In the long run, when QEC is fully implemented, detected erasures will be corrected (replaced by erasure correction in codes) rather than discarded, so this issue disappears. But until then, users of the QPU must account for a potential efficiency hit in exchange for higher-fidelity outputs. The good news is the system provides the option to only use error-free results, which is not possible on other QPUs (where all runs have hidden errors mixed in). This trade-off is essentially quality vs quantity of shots.

Resource Overhead for Error Correction

Even though QCI reduces the qubit overhead for error correction, its approach still ultimately requires more physical qubits to do full fault tolerance. For instance, a distance-d surface code might need on the order of d² dual-rail qubits for logical encoding. If, thanks to its higher threshold, QCI can use a code distance d roughly half or one-third of what a conventional surface code would require, that is a real saving, but eventually to tackle huge problems like breaking RSA, a large number of physical qubits will be needed (tens of thousands, as argued). QCI’s focus is to keep that number as low as possible, but it doesn’t eliminate the need to scale. Thus, QCI faces the standard scaling challenges every modality does: cryogenic infrastructure for thousands of qubits, automated control, fabrication yield for many components, etc. The promise is that if each qubit is 10× more powerful (in terms of error-corrected output), one needs 10× fewer of them – but one still needs to build those in large quantity.

QCI’s approach simplifies the logical qubit scaling but not the physical scaling beyond a certain point. Notably, QCI projects needing on the order of ~10⁴ physical qubits for useful systems instead of ~10⁶. Engineering 10,000 high-Q cavities and associated circuits is still a massive effort, albeit more conceivable than 1,000,000 transmons.

Integration and Footprint

One often-cited concern for cavity-based approaches is whether they can be integrated into a compact form. QCI’s near-term hardware might involve a large custom cryostat with carefully positioned 3D cavities and microwave hardware, which is fine at small scales but could become unwieldy with hundreds of qubits. Efforts will be needed to package multi-cavity systems efficiently – possibly leveraging multilayer microwave circuits or 3D stacking. It might involve combining planar and 3D techniques (for example, having an array of microfabricated cavities on a wafer bonded to a chip with transmons). Each approach must ensure that the advantages (coherence, isolation) are retained. This packaging is a non-trivial engineering challenge and a trade-off: QCI gains error performance but must solve a more complex hardware integration problem than putting qubits on a single chip.

However, it’s worth noting that even companies chasing large transmon counts (like IBM) are encountering packaging challenges (e.g. IBM’s 433-qubit Osprey has a huge chip and complicated signal delivery). So all approaches at high qubit counts require novel packaging. QCI’s might just look different (perhaps more like a modular network of cavity-transmon units).

Comparative Maturity

Another trade-off is that QCI’s technology path is relatively new and unique. Planar transmons, ion traps, etc., have large communities and decades of engineering optimizing them. Dual-rail superconducting qubits are cutting-edge – the first erasure-detected dual-rail experiments were only recently reported (2023-2024). This means some components (like fast cavity parity measurement, or large-scale cavity fabrication) are in earlier stages of development. There may be unknown issues that surface as the system scales (e.g. subtle modes of the cavities, heating issues, etc.).

By contrast, more established modalities may have already addressed many scaling wrinkles. QCI is essentially trailblazing a novel architecture, so it shoulders the risk that comes with that. On the flip side, the potential reward is leapfrogging the performance of more established qubit types (which QCI believes it can – calling conventional high-qubit-count strategies “the wrong race”).


In summary, QCI’s superconducting dual-rail cavity platform offers significant strengths: long coherence, a clean and biasable error channel, high gate fidelities at fast speeds, and inherent compatibility with error-correcting techniques. These advantages directly support its goal of efficient fault tolerance. The trade-offs lie in hardware complexity and the task of scaling up an unconventional 3D architecture. By emphasizing error detection over qubit count, QCI has in effect traded the easy part of simply adding qubits for the hard part of mastering those qubits’ errors – a trade most other companies do in reverse. So far, QCI has shown that this trade yields excellent per-qubit performance. The coming engineering work will determine how well that can be sustained as the number of qubits grows. If successful, QCI’s modality could combine the best of both worlds – the speed and controllability of superconducting circuits with the robustness of an error-resilient encoding – but it must navigate the scaling challenges inherent to multi-component qubits.

Track Record

To evaluate QCI’s track record, we consider its technical performance achievements, consistency in hitting stated goals, publications, and partnerships/commercial deployments:

Technical Performance Milestones

QCI has delivered on several key technical promises of its platform. The company claimed from the outset that it would build “more powerful qubits with built-in error detection” – a claim substantiated by the introduction of the dual-rail cavity qubit (the first of its kind in the industry). By 2024, QCI had successfully demonstrated an 8-qubit processor where each qubit features hardware-level error detection, exactly as their approach envisioned. In terms of qubit quality, QCI’s prototypes have shown state-of-the-art coherence and fidelity. The Yale/NIST team associated with QCI reported single dual-rail qubits with >1 ms coherence times and single-qubit gate fidelities >99.9% (after error post-selection). The 2025 two-qubit gate results are particularly striking: an error of only ~0.1% per CZ (ignoring heralded erasures) was measured – placing QCI’s gate performance among the best in any superconducting system. These metrics back up QCI’s marketing assertions that their approach is “demonstrating some of the best performance metrics across all hardware modalities in the industry”.

Notably, even when counting erasure events as errors, the effective gate error (~1%) and memory T₁ (~100-1000 µs) are on par with or better than contemporary superconducting qubits, validating that QCI hasn’t sacrificed base performance in pursuit of error detection. In other words, QCI’s dual-rail qubits perform like elite physical qubits while also providing extra error info, an impressive track record for a novel modality.

Publications and Scientific Contributions

QCI’s approach is underpinned by rigorous scientific work, much of which has been published in high-impact journals. The company’s close ties to Yale mean that many results were shared with the scientific community. For instance, a 2024 paper in Physical Review Letters documented mid-circuit erasure detection on a dual-rail cavity qubit, measuring an erasure event rate and confirming the feasibility of the scheme. In 2025, QCI researchers (with QCI affiliations on the author list) posted a comprehensive preprint on the dual-rail two-qubit gate; it details the bias-preserving CZ gate and provides thorough benchmarking of the error rates discussed above. These publications not only establish QCI’s credibility in the scientific community but also guide the broader field in exploring error-biasing and erasure conversion techniques.

The QCI team includes respected quantum engineers (e.g. Kevin Chou, Nitish Mehta, etc.) and its Chief Scientist Rob Schoelkopf continues to be an active presence in research. The pioneering nature of QCI’s work is evidenced by citations: they are often referenced alongside other “hardware-efficient quantum computing” approaches like bosonic codes (cat qubits) as leading examples of alternative pathways to fault tolerance.

The company has thus maintained a strong track record of scientific contribution, balancing proprietary development with peer-reviewed disclosure of key techniques. This lends confidence that QCI’s claims are grounded in demonstrated physics.

Roadmap Adherence and Evolution

While QCI was relatively quiet about intermediate milestones for a few years (operating in stealth R&D mode from 2018 through 2021), the major public targets it announced have been met. By 2022-2023, insiders expected QCI to produce an initial QPU – and indeed the Aqumen Seeker 8-qubit QPU was delivered in 2024, which can be seen as on-schedule given the complexity of the task.

The simultaneous launch of the full stack (cloud service, SDK, simulator) indicates a mature approach to bringing up a quantum system; QCI did not rush a chip to the cloud without support tools, but rather waited until the entire user-facing ecosystem was ready. This suggests good execution discipline. The company’s stated founding goal was to tackle “the biggest challenge – reliable fault-tolerant and scalable quantum computers”, and so far they have stuck to that mission, resisting diversions into more immediate but less impactful NISQ demonstrations. In terms of timeline, after the Series A in 2017, it took about 7 years to get a product in users’ hands (2024 Alpha program), which is not unusual for deep-tech hardware startups.

Now in 2025, QCI signals that a next roadmap is forthcoming focusing on scaling with error correction at the forefront. If we gauge consistency, QCI’s messaging has been remarkably stable: from 2017 press releases through 2025 blogs, the emphasis on “hardware-efficient error correction”, “intrinsic error detection”, and “correct first, then scale” appears like a mantra. This consistent vision has translated into a coherent set of deliverables (high-coherence dual-rail qubits, error detection tech, etc.), indicating that QCI has executed in line with its roadmap even as others in the industry pivoted to chasing larger qubit counts. One minor deviation is that the CEO role transitioned – Rob Schoelkopf was CEO initially, but in early 2024 tech executive Ray Smets was brought in as President and CEO to drive commercial strategy. This seems to have been a planned move to shift from research mode to go-to-market mode, and coincided with the Series B funding and product launch, rather than reflecting any problem.

Commercial Deployments and Partnerships

Though still in an early-commercial stage, QCI has engaged a number of partners and early customers:

  • The Alpha Program (launched in late 2024) includes select enterprise and research partners who have direct cloud access to the Aqumen Seeker hardware. QCI invited these users to run advanced programs using QCI’s unique features. While participant names aren’t all public, one known partner is Algorithmiq (a quantum algorithm startup in Finland) which is using QCI’s machine for quantum chemistry simulations with advanced error mitigation. Sabrina Maniscalco, Algorithmiq’s CEO, praised the “powerful synergy” between their algorithms and QCI’s error detection capabilities, indicating that QCI’s system has already been applied to non-trivial algorithmic experiments.
  • On the hardware side, QCI’s partnership with NVIDIA and Supermicro provides a bridge to classical HPC. As detailed earlier, QCI installed NVIDIA Grace Hopper systems on-site and is collaborating with the Yale Quantum Institute and QuantumCT on leveraging GPUs for quantum error correction R&D. This partnership underscores QCI’s commitment to a full-stack solution, where classical computing is tightly integrated. It also potentially opens a sales channel in the future: if QCI packages its quantum hardware together with NVIDIA accelerated servers for enterprise deployment, it could be an attractive turnkey solution for certain customers.
  • Investors and Advisors: QCI’s investor list includes not only top VCs but also strategic entities like In-Q-Tel (connecting to US government use cases) and corporate venture arms. For instance, the involvement of Sequoia Capital (and partner Bill Coughran on the board) is notable – Sequoia is also an investor in other quantum companies, and Coughran publicly stated “they are setting themselves apart from the rest of the vendor landscape”. Such endorsements suggest that QCI has successfully convinced stakeholders of its technical trajectory. Additionally, being a Yale spinout, QCI has access to a pipeline of top quantum PhDs and has maintained close ties with academic research (some staff hold dual appointments). This has likely helped them continually recruit talent and stay at the cutting edge of superconducting circuit techniques.
  • Customer Partnerships: Beyond Algorithmiq, QCI has not disclosed many specific end-users (understandably, as the hardware only recently became available). However, they have hinted that enterprise customers are already using the system as a full-stack solution. It wouldn’t be surprising if some major corporate R&D labs or government labs are part of the Alpha program under NDA. The presence of a biotech/pharma angle (drug discovery) via Algorithmiq and possibly a finance or chemicals angle (as was mentioned in 2017 plans) suggests QCI is targeting high-value applications that can benefit from higher fidelity quantum computations. If any of these early collaborations yield a notable result (even something like a small chemical simulation done more accurately thanks to error detection), that will bolster QCI’s track record on the application side.

Meeting Roadmap Targets

While QCI has not published a detailed public roadmap with dates (unlike, say, IBM’s yearly targets), we can infer targets from their communications and see if they’ve been met:

  • Qubit Fidelity and Error Detection: Target – demonstrate qubits with error detection and >99% gate fidelity. Achieved: Yes, demonstrated by 2023-25 with single-qubit gates ~0.1% error and detection of >90% of photon loss events.
  • Multi-Qubit System: Target – build a multi-qubit prototype with full-stack control. Achieved: 8-qubit Seeker system launched in 2024 with SDK, cloud, etc.
  • Full-Stack Integration: Target – provide user access and software integration (Qiskit compatibility, etc.). Achieved: QCI’s QCDL (Quantum Circuit Definition Language) and Aqumen SDK allow programs to be written and run on the QPU, including integration with Qiskit for higher-level interfaces.
  • Error-Corrected Logical Qubit: Target – (implied future goal) demonstrate a logical qubit with better stability than physical. Status: Not yet achieved as of 2025, but all necessary components (high coherence, fast feedback, etc.) are in hand. The company’s blogs suggest this is on the near-term horizon; the focus on QEC in collaboration with NVIDIA indicates they are actively designing and simulating such demonstrations. If QCI can show, for example, a small repetition code that extends the lifetime of quantum information using their error detections, that would mark a huge track record milestone – we will watch for that in late 2025 or 2026.

Overall, QCI’s track record is one of steady, methodical progress with few missteps. It has maintained technical excellence (as evidenced by peer-reviewed results) while also building a practical product (e.g. a cloud-accessible QPU).

The company has been realistic about scale – rather than claiming premature quantum supremacy or trying to race on qubit count, they have messaged the importance of efficiency and then backed it up by delivering efficient qubits. Notably, QCI’s communications have credibility: when they announced “the industry’s first built-in error detection” and “a path to fault tolerance”, they shortly after provided the data (via the Seeker QPU and published benchmarks) to support those claims.

This consistency builds trust in their roadmap. In the competitive quantum landscape, some companies have over-promised and under-delivered on timelines; QCI has so far done the opposite – it focuses on under-appreciated metrics (error rates, overhead) and delivers on them while staying relatively quiet on flamboyant claims.

In summary, QCI has established a solid track record of innovation and follow-through in superconducting quantum hardware, giving it a growing reputation as a serious contender in the quest for fault-tolerant quantum computing.

Challenges

Even with its promising technology, QCI faces several key challenges on the road to scaling its platform to fault-tolerant or cryptographically relevant scales. These challenges span engineering, physics, and competitive factors:

Scaling Hardware Size and Integration

The most immediate challenge is how to scale from the current 8-qubit device to systems with hundreds or thousands of dual-rail qubits. The dual-rail architecture, as discussed, is hardware-intensive – it requires multiple components per qubit and substantial physical space. Building a system of, say, 100 dual-rail qubits would involve 200 cavities and a few hundred Josephson junctions (couplers and ancillas), along with all the control wiring. This raises questions of mechanical and thermal engineering: can all those elements be housed in a single cryostat and kept at millikelvin temperatures without introducing vibrations or thermal load that spoil coherence? QCI may need to innovate in packaging, possibly using 3D stacking or modular fridge units that connect smaller cavity arrays. Modularity was hinted at in the company’s early plan to build a “modular quantum computer”; smaller units of qubits might be linked via quantum interconnects (optical or microwave links). Doing so while preserving the error-detection advantages is non-trivial, however: each module would need to transmit quantum states without losing photons, or the erasure-detection advantage could be lost across module boundaries.

In short, architecture-level scaling – going from a single prototype to a larger-array or distributed system – is a big challenge. Many hardware startups stumble in moving from a proof-of-concept to an engineering product; QCI will need top-tier engineering to design scalable cryogenic and control infrastructure for its complex qubits. Partnering with HPC companies addresses the classical side of scaling, but the quantum hardware scaling remains a mountain to climb. The company acknowledges this implicitly through its emphasis on reducing the required qubit count – but even tens of thousands of physical qubits would mean a major scaling effort.

Manufacturability and Yield

Closely related is the challenge of fabrication and yield. QCI’s current cavities are presumably machined or handcrafted in a lab – that works for a dozen cavities, but not for thousands. Creating superconducting cavities with consistent frequency and high Q-factor in large numbers might require novel manufacturing techniques (perhaps CNC milling with extreme precision, or lithographically defined 3D structures). Even if each cavity is slightly different, QCI could in principle tune frequencies via the transmon couplers, but too much variation could complicate multi-qubit calibration. Meanwhile, the transmon and coupler circuits likely reside on chips (perhaps attached to the cavities); scaling those without yield issues (defects, variation in Josephson junction critical currents, etc.) is a known challenge from the transmon world.

Typically, as you scale up, the chance that one component is bad increases – error correction can tolerate some bad qubits, but if too many are non-functional or low-quality, it undermines the error advantages. Quality control for every piece (two resonators, one coupler, one ancilla per qubit, etc.) will be necessary. QCI might need to implement on-chip test structures or screening methods to ensure each dual-rail unit meets spec.
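
To see why yield becomes the dominant concern, here is a rough back-of-the-envelope estimate; the per-component yield and the component count per qubit are my assumptions, based on the two-resonators-plus-coupler-plus-ancilla tally above, not QCI figures.

```python
# Illustrative yield arithmetic (assumed numbers, not QCI figures): each dual-rail
# qubit needs roughly two cavities, one coupler, and one ancilla transmon, so a
# 100-qubit array is on the order of 400 discrete components.
per_component_yield = 0.99       # assumed probability that any one part meets spec
components_per_qubit = 4         # 2 resonators + 1 coupler + 1 ancilla (approx.)

for n_qubits in (8, 32, 100):
    n_parts = n_qubits * components_per_qubit
    p_all_good = per_component_yield ** n_parts
    print(f"{n_qubits:3d} qubits ({n_parts:3d} parts): P(every part in spec) = {p_all_good:.1%}")
# ~72% of 8-qubit builds would be fully in spec, ~28% of 32-qubit builds, and
# under 2% of 100-qubit builds -- hence the need for screening, spares, or codes
# that tolerate a few dead or below-spec dual-rail units.
```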

This manufacturing challenge is somewhat new since few have tried to produce so many high-Q 3D resonators. Perhaps techniques from microwave filter manufacturing or photonics (e.g. wafer-level microcavities) could be co-opted. It’s an area of risk – if scaling the hardware introduces loss or reduces coherence, the entire error-correction edge could diminish.

The positive side is that QCI’s requirements per component (photon lifetimes in hundreds of µs, junction coherence in tens of µs) are within what’s been achieved; it’s the quantity and integration that pose the risk.

Maintaining Error Rates at Scale

QCI’s error rates and bias are excellent in small systems; a major challenge will be preserving those advantages as qubit count grows. In larger systems, new error sources can emerge: cross-talk between control lines, unintended cavity-cavity coupling (if electromagnetic modes overlap), or simply the cumulative probability of error growing with more operations. For example, in an algorithm running on 100 qubits, even a 0.5% erasure rate per qubit leads to a high probability that some qubit erases at least once. QCI’s answer is to correct those erasures using codes, but that only works if all other error channels remain low and independent. Ensuring that, say, dephasing or control errors don’t increase with system size is a challenge. Imperfect calibration could also leave residual coherent errors, which in a large system could become correlated errors.
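
That compounding effect is easy to quantify. The sketch below assumes independent erasures at a fixed probability per qubit per circuit layer – an assumed toy error model, not QCI data.

```python
# Back-of-the-envelope only: probability that at least one erasure flag fires
# somewhere in a run, assuming independent erasures at probability p per qubit
# per circuit layer (an assumed error model, not a QCI specification).
def p_any_erasure(p: float, n_qubits: int, depth: int) -> float:
    return 1.0 - (1.0 - p) ** (n_qubits * depth)

for n, d in [(8, 10), (100, 10), (100, 100)]:
    print(f"n={n:3d} depth={d:3d}  P(at least one erasure) = {p_any_erasure(0.005, n, d):.3f}")
# n=  8 depth= 10: 0.330  -> simply discarding flagged shots is still workable
# n=100 depth= 10: 0.993  -> almost every shot is flagged
# n=100 depth=100: 1.000  -> the flags must feed an erasure-correcting code,
#                            not postselection, for the computation to proceed
```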

Also, as more qubits operate simultaneously (which any sizable algorithm will require), issues like frequency crowding (even with cavities, there are only so many well-separated mode frequencies available unless the design spaces them carefully) or shared microwave lines could impose limitations.

Essentially, QCI must ensure that scaling up doesn’t introduce a “curse of dimensionality” where error rates effectively creep up or lose bias because of system interactions. This is something to watch: will the next-generation QCI QPU (perhaps 16 or 32 qubits) show the same per-gate error stats? If not, that indicates new noise creeping in. Overcoming this might involve improved shielding, better pulse shaping (to avoid leakage or unintended excitations), and using the error detection itself to actively tune up calibrations (e.g., systematically monitor erasure rates on each qubit as a diagnostic).

Real-Time Processing and Latency

Fault-tolerant operation will require extremely fast classical processing for syndrome extraction and feedback. QCI’s design is ahead of the curve here, but as the system scales the demands intensify. For instance, a surface code with dozens of check operators would produce many syndrome flags every microsecond that need to be processed and acted on. QCI’s adoption of NVIDIA’s GPU infrastructure is aimed at this, but integrating a low-latency feedback loop between the quantum hardware and a GPU is itself a challenge. Typical cloud quantum setups have feedback latencies in the milliseconds, far too slow for QEC. QCI may need to push more classical compute toward the cryostat (some companies explore cryo-controllers or FPGA-based feedback in dilution fridges) to get latency down to sub-microsecond scales. Ensuring that all this classical overhead doesn’t negate the speed advantage of superconducting qubits is crucial. The collaboration with Supermicro/NVIDIA suggests QCI is aware of the issue and tackling it, but effectively using GPUs for real-time operations is an ongoing challenge (GPUs excel at batch parallel tasks, whereas QEC decoding is a streaming task). Possibly a combination of FPGAs for the ultra-low-latency portion and GPUs for heavy simulation and analysis will be needed.
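
For a sense of the timing constraint, here is a rough budget sketch; every number in it is an illustrative assumption rather than a QCI or NVIDIA specification.

```python
# Rough latency budget, with purely illustrative numbers (not QCI or NVIDIA specs),
# for a feedback loop that must keep up with ~1 us syndrome-extraction rounds.
cycle_ns = 1000                          # assumed syndrome round time for the QPU
decode_ns_per_round = 300                # assumed time to decode one round of flags
loop_latency_ns = 500 + 200 + 300 + 100  # readout + transport + decode + feedback

# Throughput: the decoder must process rounds at least as fast as they arrive,
# or a backlog builds up and the logical clock stalls.
print("decoder keeps pace" if decode_ns_per_round <= cycle_ns else "decoder backlog grows")

# Latency: any operation conditioned on a flag waits for the whole loop, so the
# loop must stay in the microsecond regime; a millisecond-scale cloud round trip
# would be roughly a thousand times too slow for this purpose.
print(f"end-to-end loop latency: {loop_latency_ns} ns "
      f"(about {loop_latency_ns / cycle_ns:.1f} syndrome rounds)")
```

The division of labor suggested above – FPGA-class electronics near the fridge for the tight loop, GPUs for heavier decoding and simulation – is essentially a way to keep both the throughput line and the latency line of this budget in check.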

The complexity of co-designing quantum error correction hardware and software is non-trivial – it is almost like building a specialized supercomputer around the quantum processor. This is an area where even giants like IBM are actively researching (IBM’s Osprey system uses a custom control system with FPGAs to enable dynamic circuits). QCI, being smaller, must leverage partnerships smartly to avoid falling behind in this classical control race.

Competition and Market Adoption

From a business and ecosystem perspective, QCI faces the challenge of proving its approach in the marketplace. Larger competitors (IBM, Google, etc.) are pursuing different routes (e.g. bigger qubit counts with heavy error mitigation, or alternate error-biased qubits like cat qubits at AWS). QCI’s narrative is compelling technically, but it must demonstrate clear wins to convince customers to choose its platform over, say, an IBM Eagle or Osprey that has dozens or hundreds of qubits.

In the near term, QCI’s 8 qubits with error detection might solve small problems more accurately than a 50-qubit noisy machine, but many users are still enamored with qubit count and quantum volume. As one QCI blog lamented, the market tends to oversimplify around qubit number, which can create “misleading perceptions”. So QCI has the task of educating the market to value qubit quality and not just quantity. This is a messaging and demonstration challenge. They will likely need to show an example of something like: “On QCI’s machine, with only 8 qubits, we ran X algorithm and got a result closer to ideal than a 50-qubit machine could,” or “we detected and eliminated errors to achieve a quantum advantage in some small task.”

Achieving a recognizable milestone (like a certain error-corrected algorithm beating the uncorrected counterpart) would greatly help validate their approach. Otherwise, there’s a risk that potential customers stay with incumbents who promise more qubits, even if noisy.

Additionally, fault tolerance is a long race – some competitors might catch up by adopting similar techniques (for example, if IBM or Google start implementing erasure conversion or bias-preserving operations, they could erode QCI’s differentiator). Indeed, QCI has noted that other approaches, like cat codes, are also showing progress toward low-overhead qubits. QCI will need to maintain a lead in demonstrated error rates and logical-qubit performance to stay ahead.

Resource Trade-off in Codes

Another technical challenge will be choosing and implementing the right error correction codes to exploit QCI’s hardware. While the surface code is a default, it might not be optimal for erasure errors (though it has a high threshold for erasures). There may be specialized erasure codes or flag-qubit schemes that work better. QCI’s team (including collaborators like Shruti Puri and Steve Girvin at Yale) has explored ideas such as XZZX codes and erasure conversion, which could be employed. But turning these theoretical codes into real-time error correction running on actual hardware is a complex task. It will involve multi-qubit gate sequences for syndrome extraction, potentially using QCI’s ability to branch on mid-circuit measurement outcomes. Designing these protocols to minimize additional errors and fit within coherence times is a challenge. For instance, can a full surface code cycle on dual-rail qubits be done fast enough before the qubits decohere? Possibly yes, given the fast gates, but it needs to be proven.

Also, QCI’s noise bias might allow a smaller code distance against the rare bit-flip errors and a larger distance against the dominant phase errors – this is relatively novel territory. Code design and verification are thus an ongoing challenge. It is encouraging that QCI has a simulator (AquSim) to test such codes, but real-world performance may still hold surprises.
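
The payoff these code choices are chasing can be seen in a toy comparison (my illustration using a classical distance-3 repetition code, not a QCI result): a distance-d code can correct d-1 located erasures but only (d-1)/2, rounded down, unlocated errors, so knowing where an error occurred buys an extra power of p in error suppression.

```python
# Toy model (an illustration, not a QCI result): a bit stored in three copies,
# i.e. a distance-3 repetition code.
# (a) Unlocated flips: majority vote fails once 2 of the 3 copies have flipped.
# (b) Located erasures: any surviving copy recovers the bit, so recovery fails
#     only when all 3 copies are lost.
def fail_unlocated(p: float) -> float:
    return 3 * p**2 * (1 - p) + p**3

def fail_erasure(p: float) -> float:
    return p**3

for p in (0.01, 0.005, 0.001):
    print(f"p={p:.3f}  unlocated flips: {fail_unlocated(p):.2e}   located erasures: {fail_erasure(p):.2e}")
# Failure drops from O(p^2) to O(p^3): the same physical error rate goes much
# further when the error announces its location, which is the core reason QCI
# engineers its dominant error channel to be a flagged erasure.
```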

Economic and Infrastructural Challenges

Lastly, to reach CRQC scales, even with fewer qubits, QCI will require substantial capital and an expanding engineering team. The $60M Series B in 2024 is a strong boost, but developing a full fault-tolerant machine could easily demand more in the future (as dilution refrigerators, nanofabrication, etc., are costly). QCI’s relatively low public profile compared to some rivals might mean they need to work harder to attract partnerships or government grants that could subsidize large-scale experiments. The company’s connections (Yale, In-Q-Tel, etc.) might mitigate this, but it’s a pragmatic challenge to ensure funding keeps pace with technical ambition.


In summary, QCI’s challenges revolve around scaling up while keeping their error advantages intact. The fundamental physics approach seems sound; now the heavy lift is in systems engineering. They need to show that their elegant small-scale results can translate to a large, operational quantum computer. This means solving hardware scaling and integration hurdles, continuing to refine fast feedback control, and proving to the world (and potential customers) that error-corrected qubits are the true benchmark of progress, not raw qubit count. None of these challenges are trivial – they are essentially the same grand challenges faced by the whole quantum computing field, but approached from QCI’s unique angle.

The next few years will test whether QCI’s meticulously error-focused philosophy can deliver a prototype of a scalable fault-tolerant module – for instance, a handful of logical qubits working continuously with error correction. If they succeed, it will validate not just QCI’s technology but the broader notion that quality can beat quantity in quantum computing.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.