Quantum Computing Companies

IQM

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

IQM Quantum Computers is a Finland‑based hardware company building superconducting (transmon) quantum processors with a distinctly European strategy: deliver on‑premises systems tightly integrated with high‑performance computing (HPC) centers while co-designing architectures for error correction and, ultimately, fault tolerance. Rather than focus on cloud-only access or headline qubit counts, IQM emphasizes deployable machines, open low-level control for researchers, and chip topologies tailored to quantum error-correcting codes. Its near‑term deliverables (54-, 150-, and 300-qubit systems for LRZ in Germany and VTT in Finland) are positioned as stepping stones toward hundreds of logical qubits by around 2030 and, longer term, million-qubit-class fault-tolerant systems. Key technical levers include high‑fidelity transmons, tunable couplers, and two complementary layouts (“Crystal” grids and “Star” resonator hubs) intended to make quantum LDPC codes more resource-efficient than conventional surface-code lattices.

Milestones & Roadmap

Finland-based IQM Quantum Computers has emerged as Europe’s leading superconducting quantum hardware startup, and in mid-2025 it announced major roadmap milestones centered on delivering scalable systems for both research and industry. IQM’s strategy is distinguished by providing on-premises quantum computers integrated with high-performance computing (HPC) infrastructure, while charting a path to fault tolerance by 2030. Recent milestones and plans include:

2020-2024: IQM built a series of prototype superconducting processors, delivering 5-qubit, 20-qubit, and 50-qubit systems to Finland’s VTT research center as part of the country’s quantum initiative. In 2023, IQM installed a 20-qubit “IQM Radiance” system at the Leibniz Supercomputing Centre (LRZ) in Munich, Germany – the first integration of a quantum accelerator into an EU supercomputer facility. These early systems established IQM’s ability to produce functional mid-size quantum processors and tightly couple them with classical HPC environments.

2025: As part of the EuroHPC Joint Undertaking, IQM will deliver a 54-qubit superconducting quantum computer to LRZ in late 2025. This machine (an upgraded Radiance model) will be accessible to European researchers via the Euro-Q-Exa program, enabling hybrid quantum-classical workflows on a larger scale. Additionally, IQM secured an agreement (announced May 2025) to provide two quantum computers to the Czech Republic’s national center, further expanding its footprint across Europe. By the end of 2025, IQM expects to have delivered multiple >50-qubit systems and demonstrated basic error-correction building blocks on them (e.g. small logical qubits or error mitigation techniques).

2026: IQM is scheduled to deliver 150-qubit superconducting processors to two different national projects. The first 150-qubit system will go to LRZ (Germany) by end of 2026 as the second phase of Euro-Q-Exa. In parallel, IQM will deliver another 150-qubit machine to VTT in Finland by mid-2026. These will be some of the largest gate-based quantum computers in the world upon delivery, on par in qubit count with IBM’s planned 2025-26 devices. Both 150-qubit systems are purpose-built for quantum error correction (QEC) research – they will feature tunable couplers, fast qubit readout, and pulse-level control access, allowing researchers to experiment with error-correcting codes and multi-qubit entanglement at scale. Achieving 150 operational qubits will be a major validation of IQM’s technology, roughly a 5× leap from its 2024 devices.

2027: In a landmark project with VTT, IQM will deliver a 300-qubit quantum computer in 2027. This system, comprising two 150-qubit processors connected together, is aimed at being the world’s largest superconducting quantum computer procured to date. It will be integrated into Finland’s national supercomputing infrastructure. The 300-qubit machine is explicitly designed as a testbed for fault-tolerant quantum computing, where researchers can run millions of quantum operations and push towards useful logical qubits. By 2027, IQM’s roadmap envisions having demonstrated on the order of dozens of logical qubits (if component error rates allow) and quantum advantage in certain tasks via these large systems.

2030 and Beyond: IQM’s long-term roadmap, as published on its site, targets “fully fault-tolerant quantum computers with 1M qubits” in the years beyond 2030. By 2030, IQM aims to realize “hundreds of high-precision logical qubits” and achieve quantum advantage across multiple industries. In practice, this suggests an intermediate milestone of a few thousand physical qubits per system by ~2030, using advanced error correction (IQM focuses on quantum LDPC codes) to yield ~100-500 logical qubits. IQM’s strategy to scale further includes novel chip architectures (a modular “Star” topology, in which a central resonator couples many qubits, and a grid-based “Crystal” topology) and long-range couplers for modular expansion. The company expects to combine these innovations with improved fabrication (their new fabrication facility can output tens of quantum chips per year) to iterate upward in qubit count. While 1,000,000 qubits is a far-off aspiration, IQM’s concrete near-term deliverables (54-, 150-, and 300-qubit systems) position it as a serious player in the global race, giving Europe a stake in the roadmap to fault tolerance.

Focus on Fault Tolerance

IQM’s roadmap is firmly oriented toward fault-tolerant quantum computing, with explicit QEC-focused deliverables by the mid/late 2020s. The 150- and 300-qubit systems being built are described as QEC demonstrators, meant to test and refine error correction technology in a realistic setting. IQM’s philosophy is to co-design hardware and error correction from the ground up: for example, they have developed unique qubit topologies (named “Crystal” and “Star”) that are tailored for efficient error-correcting code implementation. The Crystal topology is a standard grid with high fidelity (demonstrated 99.9% CZ-gate fidelity on a 2-qubit test chip), suitable for surface codes or similar, while the Star topology uses a central resonator to connect many qubits, enabling a form of hub-and-spoke connectivity ideal for certain quantum LDPC codes. By merging these designs, IQM aims to implement QLDPC codes that can achieve the same logical error rate as surface codes with 2-10× fewer physical qubits. This is critical for reaching fault tolerance sooner; fewer qubits per logical qubit means the threshold of “hundreds of logical qubits” is attainable with thousands instead of millions of physical qubits.
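
To make the overhead claim concrete, here is a back-of-envelope sketch of how a 5× reduction in physical-qubits-per-logical-qubit changes what a fixed hardware budget buys. The surface-code formula (2d^2 - 1 physical qubits per logical qubit at distance d) is standard; the chosen distance, the 5× factor, and the 100,000-qubit budget are illustrative assumptions, not IQM figures.

```python
# Back-of-envelope: logical qubits from a fixed physical-qubit budget,
# surface code vs. a hypothetical QLDPC code with a 5x overhead saving.

def surface_code_physical_qubits(distance: int) -> int:
    """Physical qubits (data + measurement) per surface-code logical qubit."""
    return 2 * distance**2 - 1

def logical_qubits_available(physical_budget: int, per_logical: int) -> int:
    """How many logical qubits a given physical-qubit budget supports."""
    return physical_budget // per_logical

d = 15                                                   # assumed code distance
per_logical_surface = surface_code_physical_qubits(d)    # 449 physical qubits
per_logical_qldpc = per_logical_surface // 5             # assume the claimed ~5x saving

budget = 100_000  # physical qubits in a hypothetical ~2030 machine
print(logical_qubits_available(budget, per_logical_surface))  # → 222
print(logical_qubits_available(budget, per_logical_qldpc))    # → 1123
```

Under these assumptions, the same machine jumps from ~200 to ~1,100 logical qubits, which is exactly why code efficiency, not just raw qubit count, drives the fault-tolerance timeline.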

The company explicitly states goals like achieving a logical error rate of 10⁻⁹ (one in a billion chance of error per logical operation) using these techniques. Such an error rate would enable circuits deep enough for many practical algorithms. To that end, IQM’s near-term focus (2025-2027) will be demonstrating small logical qubits and break-even error rates – where a logical qubit’s error is on par with a single physical qubit’s error, a key inflection point. By 2028-2030, IQM expects to combine multiple logical qubits and perform logical operations among them, essentially building a fault-tolerant prototype computer. They’re targeting “early quantum utility” via partial fault tolerance even before full fault tolerance is achieved (for example, using QEC on some qubits while others run in NISQ mode for a hybrid approach).
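
A rough sense of what a one-in-a-billion logical error rate demands can be had from the standard surface-code scaling heuristic, p_L ≈ A · (p/p_th)^((d+1)/2). The constants below (A = 0.1, a 1% threshold) and the use of a surface-code model rather than IQM’s QLDPC codes are simplifying assumptions for illustration only.

```python
def logical_error_rate(p_phys: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Heuristic surface-code logical error rate at code distance d."""
    return A * (p_phys / p_th) ** ((d + 1) / 2)

def min_distance_for(target: float, p_phys: float) -> int:
    """Smallest odd code distance whose predicted logical rate meets the target."""
    d = 3
    while logical_error_rate(p_phys, d) > target:
        d += 2  # code distances are odd
    return d

# At a targeted 99.95% gate fidelity (p ≈ 5e-4), a 1e-9 logical rate
# needs roughly distance 13 under this toy model:
print(min_distance_for(1e-9, 5e-4))  # → 13
```

The exact distance depends heavily on the code and the constants, but the exercise shows why below-threshold physical fidelities translate into modest, attainable code distances.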

IQM’s collaboration strategy underscores its fault tolerance push. It was one of two companies (alongside France’s PASQAL) selected by the European OpenSuperQ+ program to build quantum accelerators, specifically emphasizing integration with HPC and error correction research. Additionally, IQM joined DARPA’s ONISQ program and bid on the U.S. US2QC – indicating its willingness to have its fault-tolerance approach vetted by external experts (though it wasn’t selected as a US2QC finalist, the participation shows IQM aligning with global fault-tolerance efforts). The company’s hardware provides pulse-level control and open access to lower layers, which is crucial for researchers testing QEC protocols. IQM has even co-developed a quantum computing toolkit with tight classical-quantum integration, allowing fast feedback loops – essential for real-time decoding in error correction.

In summary, every major system on IQM’s roadmap is a step toward fault tolerance. The 54-qubit and 150-qubit machines will likely be used to run small codes (like distance-3 or 5 surface codes, or tiny LDPC codes) to gather data on error syndromes. The 300-qubit machine in 2027, being two chips, may test modular quantum error correction (perhaps distributing logical qubits across chips, or entangling logical qubits between the two 150-qubit modules). By 2030, IQM envisions a prototype with “hundreds of logical qubits” protected by QEC – a scale at which fault-tolerant algorithms (e.g. some chemistry simulations, optimizations, or crypto experiments) become feasible. While this timeline is bold, IQM’s concentrated focus, especially with national and EU support, gives it credibility. Europe’s goal of a fault-tolerant quantum computer by 2030 dovetails with IQM’s plans, effectively positioning IQM as a prime candidate to deliver that capability on the continent.

CRQC Implications

Like other players pursuing fault tolerance, IQM’s progress will influence the timeline for Cryptographically Relevant Quantum Computing. If IQM achieves hundreds of logical qubits by ~2030 on a superconducting platform, it enters the same ballpark that IBM and Quantinuum have forecast for CRQC milestones (they likewise target a few hundred logical qubits by 2029-2030). A few hundred logical qubits would likely not be enough to break RSA-2048 directly – current estimates suggest around 1000 logical qubits (with very low error rates) might be needed for a one-week factoring of RSA-2048. However, demonstrating even ~100 logical qubits would be a significant fraction of that goal and could factor smaller keys or run lengthy cryptographic algorithms as proofs of concept.

IQM’s emphasis on high-quality qubits and faster error correction codes (QLDPC) could lower the overhead for cryptographic algorithms. If their qubits achieve ~99.95% fidelity and logical qubits have error rates of 10⁻⁹ or better, the resource cost for running Shor’s algorithm or other cryptanalysis reduces. For instance, a logical qubit error rate of 10⁻⁹ might allow each logical qubit to perform ~10⁸ operations before an error – meaning a few hundred logical qubits could execute the billions of operations required for factoring smaller RSA keys without failing. As a result, we might see IQM attempt a record-setting crypto experiment in the late 2020s: perhaps factoring a 512-bit or 1024-bit RSA number as a demonstration of its fault-tolerant capability (just as a milestone, not a real threat to security yet). Such a feat would galvanize the urgency for post-quantum cryptography in Europe.
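
The operations-per-error arithmetic above can be checked directly: assuming each logical operation fails independently with rate p_L, the chance a circuit of N operations completes cleanly is (1 - p_L)^N. The 10⁻⁹ rate and the operation counts are the figures discussed above, used purely illustratively.

```python
import math

def circuit_success_probability(p_logical: float, n_ops: int) -> float:
    """P(no logical error across n_ops operations), computed stably in log space."""
    return math.exp(n_ops * math.log1p(-p_logical))

print(round(circuit_success_probability(1e-9, 10**8), 3))  # → 0.905 (10^8 ops: comfortable)
print(round(circuit_success_probability(1e-9, 10**9), 3))  # → 0.368 (10^9 ops: marginal)
```

So a 10⁻⁹ logical rate comfortably supports ~10⁸ operations per logical qubit, while billion-operation circuits sit right at the edge, consistent with the caveats in the text.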

Strategically, IQM provides a diversified path to CRQC that doesn’t rely on US tech giants. Governments in the EU (and potentially countries like Finland and Germany backing IQM) might leverage IQM’s roadmap to claim technological parity in the quantum race. If IQM’s 300-qubit machine in 2027 shows even elementary cracking of a crypto algorithm (e.g. running Grover’s algorithm on a small database, or breaking a toy elliptic curve), it would validate Europe’s investment. By 2030, should IQM produce ~200 logical qubits, one could imagine it being only an order of magnitude away from full CRQC capability – which, extrapolating trends, might put European hardware within reach of breaking RSA by the early-to-mid 2030s. This timeline aligns with warnings (like NSA and EU reports) that we must migrate to quantum-safe cryptography by 2035 at the latest.

In simpler terms, IQM’s success would mean CRQC is no longer the domain of just IBM/Google. Multiple independent platforms achieving hundreds of logical qubits by 2030 greatly increase the chances that at least one will hit the CRQC threshold. It’s worth noting that superconducting qubits (IQM’s modality) have fast gate speeds, so a machine with sufficient qubits could potentially factor a large number faster than, say, an ion-trap machine of equal logical qubits (though error rates and parallelism factor in). If IQM follows the roadmap and if error correction scales well, an IQM system with, say, 1024 physical qubits (maybe ~50 logical) might attempt a small cryptographic challenge (like breaking RSA-256 or some symmetric key brute force) as a milestone demonstration. Each such step tightens the timeline for adversaries to harvest encrypted data now and decrypt later.

In conclusion, while IQM’s 2030 goal (~100s logical qubits) likely won’t single-handedly break strong cryptography, it significantly narrows the gap. It reinforces that CRQC will not require millions of qubits if error correction and algorithms improve – perhaps under a million physical qubits might suffice. IQM’s advancements in QEC (especially if QLDPC codes prove efficient) could even reduce the logical qubit count needed for RSA. All of this underscores the importance of PQC migration, as multiple teams (now including IQM) are on trajectories that make a 2048-bit RSA-cracking machine plausible around 2030-2033. IQM’s roadmap shows that a concerted effort, even from a startup, can credibly aim for CRQC-relevant scale on similar timelines as the big players – a sign that the global competition is driving faster progress.

Modality & Strengths/Trade-offs

IQM utilizes superconducting qubits (transmons), the same general modality as IBM and Google, but with some design twists. Its qubits are implemented on chips using aluminum or niobium Josephson junction circuits, operated at ~10 millikelvin in dilution refrigerators. A primary strength of superconducting qubits is their fast gate speed – two-qubit gates (e.g. CZ or iSWAP gates mediated by tunable couplers) can execute in ~50-200 nanoseconds. This allows millions of gate operations per second, which is advantageous for running deep circuits (given sufficient error correction). Indeed, IQM has reported industry-leading two-qubit fidelities of 99.9% on a test device, and it is targeting 99.95% on larger systems. If those fidelities hold as qubit count grows, the error rates per gate (10⁻³ to 10⁻⁴) combined with fast gates make a solid foundation for QEC.

Another strength of IQM’s approach is the unique chip topologies it has developed. The IQM Crystal topology is a standard grid of transmons (4 nearest neighbors), similar to IBM’s heavy-hex lattice, optimized for parallel gate operations and compatibility with surface codes. The IQM Star topology, however, is novel: it uses a central bus resonator that each qubit can couple to, giving effectively all-to-all connectivity (any qubit can interact with any other, via the resonator). A 24-qubit “Star” chip is already online via IQM’s cloud (Resonance platform). This high connectivity can significantly reduce circuit depth for algorithms and enable efficient encodings for certain LDPC error-correcting codes (which often require non-local parity checks). By combining Star and Crystal, IQM can create architectures where groups of qubits have local clusters with global links – ideal for implementing more powerful error correction than a simple 2D grid allows. In essence, IQM is tailoring hardware to code, which is a notable strength: they’re not just relying on brute-force increase in qubits, but making each qubit more useful through smarter connectivity.

IQM’s systems are also built for HPC integration and on-prem deployment. This is a strength in terms of practical usability: the machines come with a full stack (control electronics, software) that can plug into supercomputers. They have developed an SDK and control stack that supports “tight” integration – low-latency classical-quantum interfacing, job scheduling with HPC queues, etc. The ability to install a quantum system on-site (as opposed to cloud-only access) appeals to national labs and certain enterprise customers who need control over the hardware or have security constraints. IQM touts that it has delivered more quantum computers in the past year than anyone else, precisely because of this model of selling and delivering complete systems (to VTT, LRZ, and others). This approach also means IQM’s hardware is user-accessible at low levels – researchers can get pulse-level control and even modify how qubits are calibrated or how error mitigation is done. That openness is a strength for accelerating research: it enables rapid iteration on things like custom gate sequences or novel error correction protocols, which a more black-box cloud service might not allow.

However, superconducting qubits come with well-known trade-offs. They require complex cryogenics – large dilution fridges and extensive microwave control wiring. As qubit counts approach hundreds (like IQM’s 150-qubit plan), engineering challenges include dealing with hundreds of coaxial cables or adopting cryogenic multiplexing and on-chip control. IQM has indicated it’s employing techniques like tunable couplers to reduce residual interactions (improving fidelity), and likely uses heavy multiplexing for readout to manage wiring. Still, scaling to 300 qubits in a single (or dual) fridge will test the limits of today’s cryo hardware.

Another trade-off is that superconducting qubits have limited coherence times (typically 50-200 microseconds for transmons). Even with IQM’s high fidelities, a gate sequence longer than a few thousand operations will see errors accumulate without error correction. This makes achieving logical qubits imperative. IQM is somewhat mitigating this by focusing on faster gates (they mention ultra-fast control pulses and high reset/readout speeds in their design). The faster you can do gates and cycle the code, the more error correction cycles you can execute before qubits decohere.
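
The interplay between gate speed and coherence can be put in numbers: dividing coherence time by gate duration bounds how many sequential gates fit before decoherence dominates. The T2 and gate-time values below are the ranges quoted in this section, paired illustratively from pessimistic to optimistic.

```python
def coherence_limited_depth(t2_us: float, gate_ns: float) -> int:
    """Rough count of sequential gates within one coherence window (no QEC)."""
    return int(t2_us * 1_000 / gate_ns)  # convert microseconds to nanoseconds

# Pessimistic, middling, and optimistic pairings of the quoted ranges:
for t2_us, gate_ns in [(50, 200), (100, 100), (200, 50)]:
    depth = coherence_limited_depth(t2_us, gate_ns)
    print(f"T2 = {t2_us} us, gate = {gate_ns} ns -> ~{depth:,} sequential gates")
```

Even the optimistic pairing tops out around a few thousand sequential gates, which is why logical qubits are imperative for deep circuits, and why faster gates and readout buy more QEC cycles per coherence window.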

One advantage IQM has as a startup is flexibility in innovation. For example, they can adopt new materials or 3D integration more quickly if needed. They mention enhanced cleanroom facilities for novel chip fabrication (to implement “unique long-range connections” and compact packaging). Possibly, IQM could experiment with chip-to-chip quantum links (their 300-qubit system uses two chips, likely connected by microwave couplers). If they demonstrate a modular coupling that’s high-fidelity, that’s a big strength for future scalability (IBM and others are pursuing similar approaches, but IQM might find a simpler proprietary solution for the 150+150 link).

In terms of weaknesses/trade-offs specific to IQM: as of now, its largest deployed processors are in the ~20-50 qubit range, so it has yet to prove performance at the 150-qubit scale. There is execution risk in going from ~50 to 150 qubits in just a couple of years. Yield in fabricating larger chips is one concern – maintaining 99.9% fidelity across 150 qubits means every junction and resonator on the chip must be nearly perfect. IQM’s investment in its own fabrication suggests it’s addressing this by controlling the manufacturing process. Indeed, IQM reported improved uniformity and longer coherence in its newer chips (e.g., the 20-qubit system delivered to LRZ). But until the 54-qubit system is live and meets specs, scalability remains an unproven area.

Another trade-off: competition and differentiation. Superconducting qubits are a crowded field with IBM, Google, Rigetti, etc. IQM’s choice to focus on on-premise delivery and European collaboration is a smart differentiator, but technically, they face the same coherence and crosstalk challenges. Their Star topology’s high connectivity could come at the cost of increased cross-talk or difficulty in calibrating many coupling modes. Ensuring that this doesn’t introduce instability will be important.

In summary, IQM’s modality offers speed and a mature knowledge base (decades of superconducting R&D to draw on), and the company’s innovations in topology and integration play to that modality’s strengths. The trade-offs will be in engineering complexity: cooling, wiring, and maintaining coherence as scale grows. Given IQM’s current trajectory and partnerships, their strengths – high fidelity, unique architectures, and HPC integration – appear to outweigh the challenges, positioning them well as a superconducting contender.

Track Record

IQM has built an impressive track record in a short time, especially in delivering on its promises to customers. Founded in 2018, IQM rapidly grew from a university spin-off into a company that by 2023 had delivered multiple working quantum computers to end-users – a claim only a few companies worldwide can make. In 2020, IQM set up a 5-qubit system at VTT (Finland) as a proof-of-concept. By 2021-2022, it had a 20-qubit prototype operational. Notably, in summer 2024, IQM and LRZ launched Germany’s first integrated quantum-HPC system (the 20-qubit IQM computer attached to an HPC cluster). This was delivered on schedule and was a pioneering achievement for Europe.

IQM’s success in competitive procurement projects is a strong indicator of its credibility. In October 2024, it won the Euro-Q-Exa contract to supply 54- and 150-qubit systems to LRZ, beating out other contenders. The selection committee evidently trusted IQM’s roadmap and technical ability to scale from 20 to 50+ qubits in one year and to 150 in two years. Similarly, in May 2025 IQM signed with VTT to build 150- and 300-qubit machines – again reflecting confidence from the Finnish government based on IQM’s earlier deliveries of smaller systems. These project wins effectively validate IQM’s track record: they have consistently met interim goals (like building the 5-, 20-, and 50-qubit devices) such that customers now expect them to hit the bigger targets as well.

On the technical publication side, IQM’s team has reported advances in qubit coherence and gating. In 2022, IQM researchers published results on a 2-qubit module achieving 99.9% two-qubit gate fidelity – an important milestone that put them on the map for quality, not just quantity. They’ve also demonstrated novel two-qubit gates with tunable couplers that suppress unwanted interactions, which is crucial for scaling. Their quantum volume and other benchmarks haven’t been publicly touted as much (likely because their focus has been more on delivering hardware to partners rather than running cloud benchmarks), but anecdotal evidence from partners suggests the systems perform as expected for their size.

Importantly, IQM has shown a pattern of under-promising and over-delivering in certain cases. For example, while they were known to be working on a 50-qubit device, the announcement of jumping to 150 and 300 qubits with concrete timelines was a positive surprise for the community. The fact that they could make that announcement (with funding and contracts secured) implies they had internally solved some key scaling issues (like modular chip architecture) earlier than anticipated. Their resource backing is solid: IQM has raised substantial venture funding (over €200M by 2023) and benefits from government grants in Finland and the EU. This has allowed them to expand their fabrication and hire talent from big quantum companies (some engineers from IBM and Google have joined IQM, bringing expertise).

In terms of meeting roadmap goals: IQM’s 2020 internal roadmap likely aimed for ~50 qubits by 2024, which they achieved. The new roadmap targets (54 in 2025, 150 in 2026, etc.) are aggressive, but IQM’s execution so far – delivering systems on time to LRZ and others – gives reason for optimism. They have also formed strong partnerships (e.g., with Atos/Eviden for quantum control integration, and with other startups for software). This ecosystem approach has helped them avoid some pitfalls (they aren’t trying to do everything alone; they leverage HPC centers’ expertise for integration, etc.).

One area where IQM’s track record is notable is customer enablement. The early systems at VTT and LRZ have been actively used by researchers; IQM has reported feedback from those users that informed their next designs. This iterative, client-focused approach (similar to how classical computing companies co-design with big customers) means IQM’s hardware and software stack is relatively user-friendly and robust. For instance, they have a functional cloud service (IQM Resonance) where external users can run jobs on smaller 5-qubit and 24-qubit systems. Maintaining a cloud service and an on-prem product line simultaneously is not trivial, and IQM has managed both, indicating good software and operations practices.

In summary, IQM’s track record is characterized by rapid scaling, on-time deliveries, and gaining trust from major stakeholders. The company has hit its early milestones (multi-qubit prototypes, national project deliveries) and has used those successes to catapult to more ambitious goals. While the biggest challenges lie ahead (no one has yet built a 150-qubit high-fidelity superconducting machine), IQM’s past performance – completing projects in a timely manner and achieving world-class fidelities – bodes well. It’s also worth noting they have faced no known major setbacks publicly; no reports of, say, a promised system failing to work. This clean track record builds confidence that IQM might indeed realize its roadmap on schedule or close to it, a crucial factor in the breakneck timeline of quantum tech.

Challenges

Despite its strong progress, IQM faces several challenges on the way to its lofty goals. The first and most immediate is technical scaling: moving from a 20-50 qubit regime into the 100+ qubit regime. As of 2025, only a few organizations (notably IBM and Google) have demonstrated >100 superconducting qubits on a single processor. IQM’s plan to deliver 150-qubit chips by 2026 means it must tackle issues of yield and uniformity in fabrication. Superconducting qubits can suffer from fabrication defects (e.g. microscopic two-level system defects in junctions) that can render some qubits too short-lived or too error-prone. With 150 qubits on chip, the probability of having “bad” qubits or couplers goes up. IQM will need to push its fab process to the cutting edge to ensure high yield of functional qubits. This might involve rigorous materials research (to minimize loss tangent in dielectrics, etc.) and extensive testing. While their 2024 Nature paper indicated improved uniformity across qubits, maintaining that at 150 qubits is uncharted territory. IBM mitigates this by having redundancy (e.g., not all qubits need to be used if some are bad) and by iterating designs; IQM, with fewer shots on goal, might have less room for error. Any significant yield issues could delay deliveries or reduce performance (if they have to operate chips with a few disabled qubits).

Another challenge is heat and control electronics. A 150-qubit system potentially means hundreds of control lines (flux bias lines, microwave drive lines, readout resonators). Even with multiplexing, the sheer I/O and heat load in the cryostat can be problematic. Each coax cable carries heat into the fridge; with too many, the fridge might struggle to keep the base temperature. IQM will likely need to adopt advanced techniques such as cryogenic microwave multiplexers, cryo-CMOS control chips, or photonic fiber I/O to handle scaling – these are areas of active research, and integrating them in time is a challenge. Additionally, two 150-qubit chips (for the 300-qubit system) means a modular coupling solution must be in place. That could be a microwave link or perhaps an optical link (though optical-to-superconducting transduction is very cutting-edge). Ensuring that two chips can entangle qubits with high fidelity is non-trivial; even IBM has yet to demonstrate high-fidelity entangling gates between qubits on separate chips at scale. IQM’s timeline suggests they have something in mind (possibly a resonator bus between chips or a novel packaging). Making that work reliably by 2027 is a big technical hurdle.

On the software and error correction front, a challenge will be to actually use these large machines effectively. Running a 300-qubit device with partial error correction will require very sophisticated calibration and control software. IQM’s pulse-level control approach is good for flexibility, but when pushing to logical qubits, they’ll need real-time feedback and feedforward. Implementing a fast classical decoder (likely on FPGAs or classical co-processors) that can handle QEC on 150+ qubits is a software/hardware co-design challenge that few have solved. IBM, for instance, is developing custom decoding ASICs for similar purposes. IQM will have to either develop or adopt such technology to stabilize logical qubits in real-time. Any lag in this could hamper achieving the targeted logical qubit performance.
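
To see why real-time decoding is hard, consider the syndrome throughput a decoder must absorb. The sketch below assumes one surface-code patch producing 2d^2 - 2 stabilizer measurements per QEC cycle with a ~1 microsecond cycle time (gates, readout, and reset); these are typical order-of-magnitude numbers for superconducting QEC, not IQM specifications.

```python
def syndrome_throughput_bps(n_stabilizers: int, cycle_ns: float) -> float:
    """Syndrome bits per second one decoder instance must keep up with."""
    return n_stabilizers / (cycle_ns * 1e-9)

d = 15                     # an illustrative code distance
n_stab = 2 * d**2 - 2      # 448 stabilizer bits per cycle for one patch
rate = syndrome_throughput_bps(n_stab, 1_000)
print(f"~{rate / 1e6:.0f} Mbit/s of syndrome data per logical qubit")
```

Hundreds of megabits per second, per logical qubit, with corrections needed within a few cycles to avoid an ever-growing backlog: this throughput-plus-latency requirement is why dedicated FPGA or ASIC decoders come up.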

IQM also faces competitive pressure and the need for differentiation. As multiple superconducting efforts converge on fault tolerance, IQM must demonstrate clear advantages, be it in connectivity (Star topology) or integration (HPC coupling). If, hypothetically, by 2026 IBM or Google have a 1000-qubit chip, IQM’s 150 might seem less impressive – so they are racing a bit against time to show results before the giants pull further ahead in scale. There is also Rigetti, which (despite struggles) aims for 100+ qubits in a modular form, and Chinese groups, etc., which might unveil big chips. Thus, IQM has to execute nearly flawlessly to maintain a leadership position in Europe and a competitive position globally. This is challenging for any startup, as even small delays can cause them to be eclipsed by better-funded rivals.

Another challenge lies in talent and resource allocation. IQM grew fast to ~300 employees, but realizing a million-qubit vision will require many more specialized engineers. Recruiting and retaining top quantum engineers in competition with Big Tech and other well-funded startups is an ongoing risk. Moreover, IQM is juggling multiple contracts (EuroHPC, VTT, possibly others) simultaneously – delivering on all of them on tight timelines will stretch their team. Meeting one deadline cannot come at the expense of another project. Program management in this multi-project environment is critical; any slip in one could cascade (for example, if the 54-qubit system for 2025 is late, it compresses the time to debug issues before the 150-qubit delivery).

Finally, financial and market challenges exist. While IQM has strong European support, the quantum industry is inherently high-risk. If, for instance, one of their delivered systems underperforms or a milestone is missed, investor sentiment could shift. Also, as quantum computing moves from demonstration to early adoption, customers will expect not just qubits but also applications. IQM will need to show that its machines can do something useful (even if modestly) to justify continued investment. This means fostering software and algorithm development on its platforms – an effort they are involved in (the Munich Quantum Software Stack, etc.), but results need to materialize (like high-impact research results from users of their systems).

In summary, IQM’s challenges are those of scaling up a complex hardware system under time pressure, while matching strides with global competitors. They must maintain qubit quality at scale, engineer new solutions for chip linking and control, and manage the execution of multiple large projects. The upside is that none of these challenges appear insurmountable – they are natural hurdles in the roadmap which IQM anticipated (their roadmap explicitly mentions strategies for many of these: QLDPC for efficiency, a new cleanroom for better fab, etc.). If they can systematically tackle each issue, IQM could very well meet its roadmap, but any unforeseen blocker (e.g. a fundamental coherence limit, or a supply chain issue for fridge components) would test the resilience of their timeline. As with all in this race, the next few years will be the proving ground for IQM’s approach and its ability to overcome these challenges.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.