Quantum Computing Companies

Quantinuum

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

Quantinuum, formed by the 2021 merger of Honeywell Quantum Solutions and Cambridge Quantum, is another leader in trapped-ion quantum computing. It combines Honeywell’s hardware prowess with Cambridge’s algorithm/software expertise. Quantinuum’s roadmap is notably direct about pursuing fault tolerance, and they’ve recently accelerated their timeline.

Milestones & Roadmap

In September 2024, Quantinuum announced an “accelerated roadmap” to achieve a universal, fully fault-tolerant quantum computer by 2030. This was accompanied by details of their forthcoming hardware, culminating in a fifth-generation ion-trap system named “Apollo”. Apollo is slated to support hundreds of logical qubits built on a much larger pool of physical qubits; as of the announcement, their flagship H2 processor had been upgraded to 56 qubits and had posted a record-high quantum volume above 2,000,000. (Quantum Volume is a holistic performance metric; surpassing two million is significant, indicating very low error rates across a sizable number of qubits.) Quantinuum’s previous systems were the H-series: H1 with 10-20 qubits and H2, which launched with 32 qubits before the 56-qubit upgrade. Apollo is the planned successor, likely using newer trap technology and perhaps more integrated optics.
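As a rough aid for reading the Quantum Volume figures quoted in this profile, here is a minimal sketch of the standard definition (my own illustration, not Quantinuum’s benchmark code): QV is reported as 2^n, where n is the size of the largest n-qubit, n-layer random “square” circuit the machine passes the benchmark on, so a QV above 2,000,000 corresponds to roughly 21-qubit, 21-layer test circuits.

```python
import math

def qv_to_circuit_size(quantum_volume: int) -> int:
    """Quantum Volume is reported as 2**n, where n is the largest n-qubit,
    n-layer random 'square' circuit the machine passes the benchmark on.
    Return that n for a reported QV value."""
    return int(math.log2(quantum_volume))

for label, qv in [("H1 at launch", 128), ("H2 (upgraded, 2024)", 2_097_152)]:
    n = qv_to_circuit_size(qv)
    print(f"{label}: QV {qv:,} ~ {n}-qubit, {n}-layer test circuits")
```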

One headline milestone for Quantinuum was their achievement of 12 logical qubits in 2024, in a demonstration done jointly with Microsoft researchers. They achieved “three 9’s” fidelity (99.9%+) on certain operations with those logical qubits. This strongly suggests they encoded 12 logical qubits in an error-correcting code, possibly the [[7,1,3]] color code or similar, across a larger number of physical qubits, and were able to perform gates and measurements with only ~0.1% error. If accurate, this is one of the first multi-logical-qubit experiments in the industry, and at very high fidelity. It suggests that Quantinuum’s physical error rates are so low that even small-distance codes can be effective. This is a major step toward fault tolerance.
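To make the “three 9’s” figure concrete, here is a back-of-envelope sketch of the standard error-suppression heuristic for a distance-d code (my own illustration with assumed threshold and prefactor values, not Quantinuum’s published analysis): with physical error rates near 0.1%, even a distance-3 code already lands in the 99.9%-logical-fidelity range.

```python
def logical_error_rate(p_phys: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Standard heuristic for a distance-d code: errors are suppressed
    roughly as (p/p_th)**((d+1)//2). p_th and A are assumed, illustrative values."""
    return A * (p_phys / p_th) ** ((d + 1) // 2)

# Assumed physical two-qubit error ~0.1%, roughly the fidelities reported for H-series gates
for d in (3, 5, 7):
    print(f"distance {d}: logical error ~ {logical_error_rate(1e-3, d):.1e}")
```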

Quantinuum’s roadmap to 2030 also includes developing modular and scalable architectures. They plan to use photonic connections between ion-trap modules (similar to IonQ’s vision) and also to explore multi-layer traps. Since Honeywell came from an atomic clock/precision control background, their ion traps use unique techniques like rotating trap electrodes and a rigorous, metrology-like approach to error reduction. Each hardware generation has dramatically improved quantum volume: H1 launched at 128 and was upgraded into the thousands over its lifetime, and the upgraded H2 now exceeds 2,000,000. By 2030, Quantinuum aims for a fully error-corrected machine that can run “millions of operations on hundreds of logical qubits” (as per their press statements). That is similar in scale to IBM’s plan of roughly hundreds of logical qubits, on a broadly comparable timeframe, perhaps a year or two apart.
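The “millions of operations on hundreds of logical qubits” phrasing implies a concrete reliability target. A crude union-bound sketch (all numbers below are my assumptions, purely illustrative) shows that finishing such a workload with, say, 90% success probability requires per-operation logical error rates around 10^-9, far below any physical error rate today, which is why full error correction is the centerpiece of the roadmap.

```python
def required_logical_error_rate(logical_qubits: int, ops_per_qubit: int,
                                target_failure: float = 0.1) -> float:
    """Crude union-bound estimate: total logical error budget divided by the
    total number of logical-qubit operations. All inputs are assumptions."""
    total_ops = logical_qubits * ops_per_qubit
    return target_failure / total_ops

# e.g. "hundreds of logical qubits" each running "millions of operations"
print(f"{required_logical_error_rate(100, 1_000_000):.1e} per operation")
```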

Focus on Fault Tolerance

Quantinuum is explicitly focused on FTQC. CEO Rajeeb Hazra stated, “We possess the industry’s most credible roadmap toward achieving universal fully fault-tolerant quantum computing.” They plan to incorporate error correction incrementally: possibly demonstrating a single logical-qubit benchmark in 2025, a small error-corrected circuit by 2027, and a full fault-tolerant subsystem by 2030. One strategy Quantinuum can leverage is “gauge” or subsystem codes together with dynamic circuits – their system is good at mid-circuit measurement and qubit reuse (they can measure one ion and immediately use that result while the rest of the computation continues). This is exactly what is needed to implement quantum error correction cycles, where you periodically measure syndrome qubits and correct errors on the fly. Indeed, their 12-logical-qubit demo likely involved continuously correcting errors during the experiment. Quantinuum also benefits from Cambridge Quantum’s algorithms team, which works on optimized compilers and error mitigation – these will help bridge to full fault tolerance by squeezing the most out of the hardware as it improves.
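To illustrate what “measure syndrome qubits and correct errors on the fly” means, here is a deliberately simplified, purely classical Monte Carlo of a 3-qubit repetition code with repeated syndrome extraction. It captures the shape of a QEC cycle (noise, mid-circuit parity measurement, correction, repeat) but it is not a quantum simulation and certainly not Quantinuum’s control code.

```python
import random

def run_repetition_code(rounds: int = 50, p_flip: float = 0.01, seed: int = 0) -> bool:
    """Toy model of repeated error-correction cycles on a 3-bit repetition code.
    Each round: (1) each data bit flips with probability p_flip, (2) two parity
    checks (syndromes) are 'measured' mid-circuit, (3) the indicated bit is fixed.
    Returns True if the encoded bit survives all rounds."""
    random.seed(seed)
    data = [0, 0, 0]                      # logical 0 encoded as 000
    for _ in range(rounds):
        for i in range(3):                # independent bit-flip noise
            if random.random() < p_flip:
                data[i] ^= 1
        s1 = data[0] ^ data[1]            # syndrome "measurements" (ancilla readout)
        s2 = data[1] ^ data[2]
        if s1 and not s2:   data[0] ^= 1  # decode and correct on the fly
        elif s1 and s2:     data[1] ^= 1
        elif s2 and not s1: data[2] ^= 1
    return data.count(1) < 2              # final majority vote

print(sum(run_repetition_code(seed=s) for s in range(1000)), "of 1000 runs kept the logical bit")
```

With the assumed 1% flip probability per round, a single error in a round is always caught; failures require two flips in the same round, so the vast majority of runs survive. Raising p_flip quickly tips the balance, which is the toy version of why low physical error rates matter so much.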

CRQC Implications

If Quantinuum stays on track, by 2030 they could have a machine with on the order of 100+ logical qubits (the phrasing “hundreds of logical qubits” by 2030 was used). While that might not immediately factor RSA-2048, it edges much closer to CRQC territory. A few hundred logical qubits might factor numbers of a few hundred bits, or weaken certain symmetric crypto via Grover’s algorithm (which offers only a quadratic speedup), and so on. Also, because Quantinuum’s ion qubits are high fidelity, the resource overhead for cryptographic algorithms might be somewhat lower. For instance, in a hypothetical scenario, 1,000 logical qubits on a trapped-ion FTQC could do what 2,000 logical qubits on a superconducting FTQC do, because deeper circuits can be tolerated. In any case, Quantinuum’s stated goal is to tackle “large-scale scientific and commercial applications” by 2030 – which in context could include simulating complex molecules (requiring many logical qubits for quantum chemistry) or solving optimization problems that are classically intractable. Cryptography is certainly among the potential applications (for national security, etc.). Given that Honeywell (a parent company) and the governments supporting Quantinuum have an interest in cryptography, it wouldn’t be surprising if factoring a large number becomes a target once their machine is ready. We should note too: Honeywell’s background in aerospace/defense means Quantinuum may have classified or non-public quantum projects for defense, possibly related to crypto or sensing, which could accelerate CRQC progress behind closed doors.
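For a sense of scale on the factoring question, a widely cited rule of thumb for Beauregard-style Shor circuits is roughly 2n + 3 logical qubits for an n-bit modulus, before counting magic-state factories and routing overhead, which dominate the physical-qubit cost in practice. The sketch below applies that heuristic (my illustration, not a Quantinuum estimate) to a few key sizes.

```python
def shor_logical_qubits(modulus_bits: int) -> int:
    """Rough rule of thumb (Beauregard-style circuit): ~2n + 3 logical qubits
    to factor an n-bit modulus. Ignores magic-state factories and routing,
    which dominate the physical-qubit count in real resource estimates."""
    return 2 * modulus_bits + 3

for bits in (256, 1024, 2048):
    print(f"RSA-{bits}: ~{shor_logical_qubits(bits)} logical qubits (very rough)")
```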

Modality & Strengths/Trade-offs

Quantinuum uses trapped-ion qubits, like IonQ, but with a somewhat different style. They trap ytterbium ions and use a QCCD (quantum charge-coupled device) architecture – meaning they physically shuttle ions between different zones on a chip for different operations. For example, certain zones serve as memory zones and others as interaction zones. This allows parallel operations and mitigates some crosstalk, at the cost of a very complex electrode control system. Honeywell (now Quantinuum) demonstrated impressive feats like transporting ions with negligible error, and even performing quantum gates mid-transport. The strength here is ultra-high fidelity: their two-qubit gate fidelities have been reported around 99.8-99.9%, and single-qubit fidelities above 99.99%. They also have low crosstalk and can perform multiple two-qubit gates simultaneously in different parts of the trap.
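A toy sketch of the QCCD idea (illustrative data structures only; the zone names and scheduler are invented for this example, and this is not Quantinuum’s control software): qubits live in named zones, and a two-qubit gate first requires shuttling both ions into a shared interaction zone.

```python
from dataclasses import dataclass, field

@dataclass
class QCCDTrap:
    """Minimal model of a QCCD chip: ions sit in zones; two-qubit gates only
    happen when the participating ions share an interaction zone."""
    positions: dict = field(default_factory=lambda: {"q0": "memory_A", "q1": "memory_B"})
    log: list = field(default_factory=list)

    def shuttle(self, ion: str, zone: str) -> None:
        self.log.append(f"shuttle {ion}: {self.positions[ion]} -> {zone}")
        self.positions[ion] = zone

    def two_qubit_gate(self, a: str, b: str, zone: str = "interaction_1") -> None:
        for ion in (a, b):                 # transport both ions together first
            if self.positions[ion] != zone:
                self.shuttle(ion, zone)
        self.log.append(f"ZZ gate on {a},{b} in {zone}")

trap = QCCDTrap()
trap.two_qubit_gate("q0", "q1")
print("\n".join(trap.log))
```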

Additionally, they integrated mid-circuit measurement early on; their H1 processor could measure one ion while leaving others untouched – something not all platforms can do easily. This is invaluable for error correction (you can measure syndrome qubits without collapsing data qubits). Quantinuum also boasts the best quantum volume records so far, which speaks to the balanced performance of their machines.

Another strength: the company is full-stack – they have their own quantum software stack (TKET compiler, etc.) and algorithm experts, which help optimize circuits to run within the hardware limits.
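As a flavor of that stack, here is a minimal example using the open-source pytket package (TKET’s Python interface). Device submission via Quantinuum’s cloud services is omitted; this is just a generic optimisation pass on a toy circuit, not a vendor-endorsed workflow.

```python
# pip install pytket   (TKET's open-source Python interface)
from pytket import Circuit
from pytket.passes import FullPeepholeOptimise

# Build a small Bell-state circuit
circ = Circuit(2)
circ.H(0)
circ.CX(0, 1)

# Run a generic TKET optimisation pass, then add final measurements
FullPeepholeOptimise().apply(circ)
circ.measure_all()

print(circ.get_commands())
```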

Trade-offs: The QCCD approach is mechanically complex. It involves literally moving ions in microscale traps, which can create heating and requires painstaking calibration. As the number of ions grows, controlling all possible trajectories and split/recombine operations becomes extremely challenging. There’s also a speed trade-off: shuttling and re-cooling ions takes far longer than the gates themselves, which slows the effective clock speed of the computer. Quantinuum mitigates this by designing for parallelism – e.g., while some ions are moving, others might be computing – but it’s still not as fast as a solid-state qubit device.
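A crude timing sketch makes the shuttling trade-off visible. The gate and transport durations below are assumed, illustrative orders of magnitude, not Quantinuum specifications; the point is only that once transport operations outnumber gates, they dominate the wall-clock time.

```python
def circuit_time_ms(n_gates: int, n_shuttles: int,
                    gate_us: float = 100.0, shuttle_us: float = 500.0) -> float:
    """Very rough serial-time estimate in milliseconds for a circuit with
    n_gates two-qubit gates and n_shuttles ion-transport operations.
    Durations are assumed, illustrative values only."""
    return (n_gates * gate_us + n_shuttles * shuttle_us) / 1000.0

# The same 1,000-gate circuit with and without heavy shuttling
print(circuit_time_ms(1000, 0))      # ~100 ms of pure gate time
print(circuit_time_ms(1000, 2000))   # ~1,100 ms once transport dominates
```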

Another trade-off is scaling the number of trapping zones; more zones mean a larger chip and more electrodes and control lines, which eventually bumps into engineering limits. That’s why they, like IonQ, will need a modular approach (multiple trap chips connected). They’ve shown the ability to entangle ions on separate chips via photonic links in the lab, but it’s not yet in the commercial machines. Achieving that with high rate and fidelity is an ongoing project. Also, while mid-circuit measurement is great, it requires efficient photon detection – their detectors and optics need to catch fluorescence from ions quickly and with low error; doing that for many ions at once is hardware-intensive.

Track Record

Honeywell surprised the quantum world by emerging around 2019 with a working quantum computer despite being new to the field, and quickly rose to leadership in certain benchmarks. They hit all their roadmap marks: the H0 prototype, then H1 (10 qubits, upgradeable to 20), then H1-Enhanced (up to 12 fully connected qubits, which achieved QV 1024), then H2 (launched with 32 qubits and later upgraded to 56; QV > 2 million as noted). Each iteration improved coherence and gate speed. Notably, they were among the first to implement three-qubit CCZ gates directly and the first to demonstrate quantum volume above 1 million.

By merging with Cambridge Quantum, they gained a robust software suite and use-case portfolio (quantum chemistry, etc.), which means their roadmap is driven by both hardware capability and the needs of algorithms. Quantinuum has also secured significant funding and revenue through corporate deals (e.g., partnering with Japanese pharma for chemistry, with DHL for logistics optimization, etc.), giving it stable footing to pursue the long roadmap.

In 2022, they demonstrated an end-to-end encryption flow using quantum one-time pads and their computer – a crossover between quantum computing and quantum communication. Small but meaningful milestones like that show they’re exercising the system in various ways.
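For readers unfamiliar with the primitive, a one-time pad is simply an XOR of the message with an equal-length key that is never reused; the quantum part of such a demo lies in generating and distributing the key, not in the XOR itself. A minimal classical sketch (with the OS random-number generator standing in for a quantum key source):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with an equal-length, never-reused key.
    In a quantum demo the key would come from quantum random numbers or QKD;
    here the OS RNG is just a stand-in."""
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"hello quantinuum"
key = secrets.token_bytes(len(msg))    # stand-in for a quantum-generated key
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg     # decryption is the same XOR
```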

Importantly, Quantinuum open-sourced their TKET compiler, signaling confidence that the differentiation lies in hardware and algorithms, not just software. So their track record is that of a company systematically tackling the hardest parts of quantum computing (error rates, integration, algorithms) and often being the first to showcase new capabilities (like high-fidelity logical qubits).

Challenges

Quantinuum’s aggressive 2030 goal means they have to scale from today’s tens of physical qubits to perhaps thousands within about five years – a steep curve. They will need to demonstrate photonic interconnects between at least two or three traps in the next couple of years to prove modular scaling. That involves collecting the photons emitted by ions, interfering them to herald entanglement, and detecting them with high reliability. It’s a quantum networking challenge as much as a computing one.
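A rough sense of why that is hard: remote entanglement is heralded, so the usable rate is the attempt rate multiplied by a success probability that compounds photon collection, coupling/transmission loss, and detector efficiency for both photons. The numbers below are assumed, illustrative values, not measured Quantinuum figures.

```python
def remote_entanglement_rate(attempt_hz: float, p_collect: float,
                             p_coupling: float, p_detect: float) -> float:
    """Heralded-entanglement rate estimate: attempts per second times the
    probability that both photons are collected, transmitted, and detected.
    All inputs are assumed, illustrative values."""
    p_success = (p_collect * p_coupling * p_detect) ** 2   # both photons must survive
    return attempt_hz * p_success

# e.g. 100k attempts/sec with 10% collection, 50% coupling, 80% detection per photon
print(f"{remote_entanglement_rate(1e5, 0.10, 0.50, 0.80):.1f} entangled pairs/sec")
```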

Also, if they pursue a larger monolithic trap in the interim (maybe a 64 or 128-qubit trap chip), they face the same vibrational mode management issues IonQ would at that scale.

Engineering-wise, they must also build more control electronics (their current system might use tens of analog waveform generators; scaling to 100s of qubits may require custom ASIC drivers to shrink the footprint).

On the software side, managing a fault-tolerant system with perhaps many error correction cycles per second will require tight integration between classical and quantum processors. Quantinuum might leverage partnerships (like with Microsoft Azure) to offload some tasks to the cloud or to classical co-processors.

A challenge unique to a company formed by a merger is ensuring a seamless culture and focus – but so far that seems to have gone well, with Cambridge Quantum’s founder Ilyas Khan serving as the initial CEO before handing over to Rajeeb Hazra (ex-Intel), who is very focused on technical execution.

The company’s bold marketing (claiming “only company with a clear path to FTQC by 2030”) will certainly be tested; competitors might dispute that, but if Quantinuum delivers even half of what it claims, it will remain in the top tier of quantum builders.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.