(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)
Introduction
IBM has laid out one of the most detailed and aggressive quantum computing roadmaps in the industry. Over the past few years, IBM Quantum has consistently hit its interim milestones, expanding both the scale of its processors and the sophistication of its approach to quantum computing. As a long-time pioneer in quantum computing, IBM was the first to put real quantum hardware on the cloud and has steadily built a global ecosystem (IBM Quantum Network) around its machines. Now, IBM’s focus is squarely on scaling up towards practical, fault-tolerant quantum computers by the end of this decade. Key highlights include ambitious qubit count milestones, a pivot toward error-corrected qubits, and an integration of quantum and classical computing into “quantum-centric supercomputing” systems.
Milestones & Roadmap
IBM’s hardware milestones form a clear trajectory of rapid scaling in qubit counts and architectural complexity. In 2021, IBM broke the 100-qubit barrier with the 127-qubit Eagle processor – the first quantum chip of that scale. This was followed in 2022 by Osprey, a 433-qubit processor that more than tripled the qubit count and pushed the limits of single-chip design. At the IBM Quantum Summit in late 2023, IBM unveiled Condor, the world’s first 1,121-qubit quantum processor. Condor marked a major leap beyond the 1,000-qubit threshold, featuring a honeycomb-like (heavy-hex) qubit layout on a large-area chip and over a mile of high-density cryogenic wiring inside a single fridge. Each of these processors not only increased qubit count but also informed new techniques for scaling (Condor, for example, achieved a 50% increase in qubit density and improvements in fabrication yield over Osprey).
Looking ahead, IBM’s roadmap becomes even more modular. In 2024, IBM demonstrated a 462-qubit Flamingo processor with a built-in quantum communication link – the first step toward connecting multiple chips in a single system. In 2026, IBM plans to introduce Kookaburra, a 1,386-qubit multi-chip processor. Crucially, Kookaburra is designed to link three such chips via chip-to-chip couplers and communication links, forming a combined 4,158-qubit quantum system. In other words, IBM Quantum System Two (IBM’s new cryogenic hardware platform for modular QPUs) will demonstrate a >4,000-qubit system by interconnecting three Kookaburra processors. This modular approach – scaling out with multiple smaller chips instead of one giant chip – is how IBM intends to grow beyond the physical limits of a single die.
Beyond 2025, IBM’s roadmap projects a path to true quantum supercomputers by the early 2030s. IBM has explicitly extended its roadmap to 2033 and beyond. In 2025-2027, chips like Loon, Kookaburra, and Cockatoo will add new capabilities (e.g. higher connectivity within chips and entangling links between modules) to pave the way for a fault-tolerant architecture. By 2028-2029, IBM plans to debut IBM Quantum Starling – a large-scale, fault-tolerant quantum computer constructed at IBM’s Poughkeepsie site. Starling is expected to have ~200 logical qubits (encoded via quantum error correction) built from on the order of 10,000 physical qubits, and to be capable of running quantum circuits with 100 million gates. This would be the first machine of its kind: a quantum system that can execute very deep, error-corrected circuits at a practically useful scale.
IBM doesn’t stop at Starling. Eventually, IBM envisions building a “quantum-centric supercomputer” called Blue Jay by 2033, scaling up to 2,000 logical qubits (likely ~100,000 physical qubits or more) and able to execute billion-gate quantum programs. In practical terms, this implies a network of many modular quantum processors linked together. (One analysis of IBM’s roadmap notes that connecting four modules of 4,158 qubits each could yield ~16,632 physical qubits, but IBM’s longer-term vision is on the order of 100,000+ qubits across many modules by 2033.) Realizing this will involve linking multiple IBM Quantum System Two units and perhaps larger fourth-generation cryogenic systems. IBM’s quantum-centric supercomputing concept entails quantum processors tightly woven with classical CPUs/GPUs into a single compute fabric – essentially hybrid clusters where quantum accelerators tackle parts of problems that classical machines cannot. IBM’s Vice President of Quantum, Jay Gambetta, described it as weaving QPUs together with CPUs/GPUs, analogous to how modern supercomputers integrate GPUs for AI workloads.
In summary, IBM’s roadmap shows a clear scaling trajectory: break the 1,000-qubit mark (Condor, 2023); reach ~4,000 qubits with modular chips (Kookaburra system, 2026); implement error correction on a ~10k-qubit machine (Starling, 2029); and scale to industry-changing sizes (~100k qubits, Blue Jay by 2033). Notably, IBM has a track record of delivering on its roadmap announcements so far, which lends credibility to these targets.
Focus on Fault Tolerance
IBM’s latest roadmap update explicitly pivots toward fault-tolerant quantum computing – the ability to run long algorithms with errors actively corrected on the fly. IBM states that unlocking the full promise of quantum computing “will require a device capable of running larger, deeper circuits with hundreds of millions of gates on hundreds of qubits… [and] capable of correcting errors and preventing them from spreading – in other words, a fault-tolerant quantum computer.” To that end, IBM has published a comprehensive architecture for fault tolerance based on new quantum error-correcting codes and modular design. By 2029, IBM aims to deliver the IBM Quantum Starling system: ~200 logical qubits encoded using quantum error correction (QEC), supporting circuits of 100 million gates. This would represent a few hundredfold increase in the scale of quantum programs that can run accurately compared to today’s devices. IBM is building Starling at its Poughkeepsie Quantum Data Center in New York, underscoring its significance as a milestone system.
Achieving fault tolerance requires major breakthroughs in QEC technology. Historically, the go-to QEC scheme (surface codes) requires thousands of physical qubits per logical qubit, making the overhead daunting (on the order of millions of physical qubits to do something like Shor’s algorithm on large numbers). IBM is pursuing a more efficient class of codes known as quantum Low-Density Parity Check (LDPC) codes. In 2024, IBM researchers published a Nature paper introducing a “bivariate bicycle” qLDPC code that encodes 12 logical qubits in 144 data qubits (plus 144 ancilla for checks). This code can achieve error suppression comparable to surface code but with ~10× fewer qubits required. The increased efficiency comes from allowing non-local connections in the code (qubits that are far apart can still be part of the same check), which IBM implements by using long-range couplers on the chip (even in a 2D layout, effectively emulating a 3D connectivity topology). IBM’s architecture uses these LDPC codes (the “bicycle” code family) as the basis for its fault-tolerant quantum memory.
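To see roughly where the “~10× fewer qubits” claim comes from, here is a back-of-the-envelope comparison – a sketch, not IBM’s published accounting. It pits the standard rotated-surface-code estimate of d² data plus d²−1 check qubits per logical qubit against the 288 physical qubits per 12 logical qubits of the [[144,12,12]] bivariate bicycle code described above; the surface-code distance d = 13 is an assumed, roughly comparable level of protection.

```python
# Physical-qubit overhead: surface code vs IBM's bivariate bicycle qLDPC code.
# Surface-code figure uses the standard rotated-surface-code estimate of
# d^2 data + (d^2 - 1) check qubits per logical qubit; the [[144,12,12]] numbers
# (288 physical qubits protecting 12 logical qubits) come from the text above.

def surface_code_qubits(distance: int, logical_qubits: int) -> int:
    """Physical qubits for independent rotated-surface-code patches."""
    per_logical = distance**2 + (distance**2 - 1)
    return per_logical * logical_qubits

def bivariate_bicycle_qubits(logical_qubits: int) -> int:
    """Physical qubits using whole [[144,12,12]] blocks (288 physical per 12 logical)."""
    blocks = -(-logical_qubits // 12)        # ceiling division
    return blocks * 288

logical = 12
d = 13                                       # assumed comparable surface-code distance
sc = surface_code_qubits(d, logical)
bb = bivariate_bicycle_qubits(logical)
print(f"surface code (d={d}): {sc} physical qubits for {logical} logical qubits")
print(f"bivariate bicycle:    {bb} physical qubits for {logical} logical qubits")
print(f"ratio: ~{sc / bb:.0f}x fewer physical qubits with the qLDPC code")
```

The exact ratio depends on the distances and layouts assumed, but it lands in the same roughly order-of-magnitude savings IBM cites.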
Another critical piece IBM has tackled is the real-time decoder. In a fault-tolerant quantum computer, errors must be detected and corrected faster than they accumulate. This means processing syndrome measurements through a classical decoder in (near) real-time to inform corrections. IBM recently designed a high-speed, efficient decoding algorithm called Relay-BP (a belief-propagation variant) that can run on an FPGA/ASIC with significantly reduced complexity. According to IBM, this decoder is accurate, fast, flexible, and compact – achieving a 5×-10× speedup over prior decoders and amenable to on-chip or local FPGA implementation. In June 2025, IBM released a paper detailing this decoder, demonstrating it can keep up with the QEC cycle without requiring a large external supercomputer. Eliminating the need for co-located HPC resources for decoding is a big step, as it simplifies the system architecture for a scalable fault-tolerant machine.
IBM’s roadmap through 2028 progressively integrates these QEC advances: by 2025, the Loon processor will introduce new long-range on-chip couplers (c-couplers) to provide the extra connectivity LDPC codes require; by 2026, Kookaburra will serve as the first QEC-enabled module (storing information in an LDPC code alongside a logical processing unit that manipulates those logical qubits); by 2027, Cockatoo will demonstrate entanglement between two QEC-enabled modules, showing that separate modules can interact quantum-mechanically. All of this leads up to Starling in 2028-29, which will combine multiple modules, error-corrected memory, logical operations (including magic-state distillation for non-Clifford gates), and the fast decoder to realize a fully fault-tolerant computing system.
IBM is openly confident in this plan. “Recent revisions to that roadmap project a path to 2033 and beyond, and so far, we have successfully delivered on each of our milestones,” IBM noted in mid-2025. Executives have stated that IBM may be the only organization on track to run useful programs on hundreds of logical qubits by the end of the decade, essentially claiming a leadership position in the race to fault tolerance. Indeed, IBM expects to achieve quantum advantage (solving some problems more efficiently than any classical computer) by 2026 on interim systems, and to have those “advantage-era” algorithms port seamlessly onto the fault-tolerant Starling by 2029. In other words, IBM is ensuring that the software and use-cases developed in the next few years will carry over to the error-corrected quantum computers when they arrive – an important consideration for early adopters.
To summarize this focus: IBM’s quantum effort is now centered on making logical qubits a reality. By the end of the decade, IBM aims to deliver on the order of 10^8 (100 million) quantum operations on ~200 logical qubits in a single run. If successful, that machine (Starling) would be a landmark: the first large-scale quantum computer where error-corrected qubits are doing substantial, algorithmically meaningful work. It would also validate IBM’s strategic bet on LDPC codes and modular architectures over other approaches like monolithic surface-code-based chips. IBM’s own researchers describe this plan as “attempting to rewrite the rules of computing in just a few years” and acknowledge it will require solving “incredibly tough engineering and physics problems” along the way. Nevertheless, backed by steady progress and a multi-disciplinary team (device engineers, theorists, software developers, and an active user community), IBM is pushing aggressively toward the fault-tolerant frontier.
CRQC Implications
One frequent question is how IBM’s planned machines relate to cryptographically relevant quantum computing (CRQC) – i.e. the ability to break strong encryption like RSA. It’s generally estimated that breaking modern public-key encryption (RSA-2048, etc.) will require on the order of thousands of logical qubits running error-corrected algorithms like Shor’s for a sufficient number of operations. IBM’s 2029 goal of ~200 logical qubits falls well short of that threshold. A straightforward implementation of Shor’s algorithm needs roughly 2N logical qubits for an N-bit RSA modulus, i.e. on the order of 4,000 logical qubits for RSA-2048. Even with algorithmic improvements, the requirements remain far above 200 logical qubits: a recent study from Google Quantum AI found that, with optimized algorithms and architectures, breaking RSA-2048 could potentially be done in under a week with under 1 million physical qubits, which still corresponds to on the order of ~1,000 logical qubits once error-correction overheads are accounted for. Thus, IBM’s ~200-logical-qubit Starling system would not immediately threaten RSA or other similar cryptographic schemes – it sits below the “critical mass” needed for full CRQC by roughly an order of magnitude.
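For concreteness, here is the arithmetic behind those estimates as a small sketch. The 2N-logical-qubit rule of thumb is the straightforward Shor implementation mentioned above; the physical-per-logical overhead values are illustrative assumptions, not IBM figures.

```python
# Back-of-the-envelope CRQC sizing using the figures quoted above.
# The physical-per-logical overheads are illustrative assumptions, not IBM numbers.

def shor_logical_qubits(rsa_bits: int) -> int:
    """~2N logical qubits for a straightforward Shor attack on an N-bit modulus."""
    return 2 * rsa_bits

logical_needed = shor_logical_qubits(2048)            # ~4,096 logical qubits
for physical_per_logical in (250, 500, 1000):         # assumed QEC overheads
    total = logical_needed * physical_per_logical
    print(f"{logical_needed} logical x {physical_per_logical} physical/logical "
          f"= ~{total / 1e6:.1f}M physical qubits")

# IBM's announced targets, for comparison (from the roadmap discussed above):
print("Starling (2029): ~200 logical / ~10,000 physical")
print("Blue Jay (2033): ~2,000 logical / ~100,000 physical")
```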
However, IBM’s efforts are certainly a major stepping stone toward that regime. Achieving a few hundred logical qubits with fault tolerance would demonstrate many of the ingredients needed for a cryptographically relevant quantum computer, just at smaller scale. From there, scaling up to thousands of logical qubits is largely an engineering challenge of adding more modules and more physical qubits – something IBM’s roadmap explicitly plans for in the 2030s. IBM itself acknowledges that realizing the full potential of quantum computing may require “hundreds of thousands, maybe millions of high-quality qubits”. The company is planning for that kind of scale in the long run, even if it doesn’t expect to reach cryptography-shattering power by 2029. The 2033 Blue Jay vision of ~2,000 logical qubits would put IBM much closer to the CRQC realm, and beyond that IBM foresees essentially unlimited scaling via quantum data centers.
It’s also worth noting that IBM’s public communications around its roadmap emphasize useful applications like scientific simulations, AI, and optimization – not explicitly “we’re going to break RSA.” IBM talks about quantum advantage in areas like complex chemistry simulations, modeling new materials, or hard optimization problems for industry. The implicit understanding, though, is that once fault-tolerant quantum computing is achieved, any problem that is tractable to quantum algorithms (including cryptography) can eventually be tackled with more hardware. IBM’s near-term goal is to demonstrate practical quantum advantage well before reaching the CRQC threshold, so that quantum computing is delivering value (and motivating investment) without needing to immediately crack cryptosystems. In parallel, the world is moving toward quantum-safe encryption in anticipation of future large quantum computers. IBM is actually involved in post-quantum cryptography efforts as well (developing algorithms that resist even quantum attackers), underscoring that breaking encryption is not the stated aim of their quantum program. Nonetheless, IBM’s march toward fault tolerance is one of the clearest paths to eventually achieving a cryptographically relevant quantum computer, likely within the next 10-15 years if progress continues. In summary: IBM’s 2029 machine won’t by itself break RSA-2048, but it significantly closes the gap, and IBM’s roadmap beyond 2029 shows a continued scaling that brings CRQC capabilities into sight.
Modality & Strengths/Trade-offs
IBM has unwaveringly focused on superconducting transmon qubits as its technology modality. These are qubits made from Josephson junction circuits on silicon chips, operated at millikelvin temperatures. One strength of this approach is speed: Superconducting qubits have very fast gate times (typically on the order of 10s of nanoseconds for single-qubit operations and ~100-200 ns for two-qubit entangling gates). This means quantum circuits can be executed quickly, reducing susceptibility to decoherence per operation. IBM has leveraged this speed to run circuits with thousands of gate operations within the short coherence window of the qubits. In fact, IBM recently demonstrated the ability to run circuits with 5,000 two-qubit gates successfully on their 127-qubit and 133-qubit systems. This was part of IBM’s “100×100 Challenge” (100 qubits, depth-100 circuits) which they accomplished in 2024 by using improved hardware (the Heron processor) and software stack to execute a 100-qubit, depth-100 circuit in under 24 hours. Such fast gating and execution capabilities give superconducting qubits an edge in near-term performance and have allowed IBM to push quantum circuit complexity to the “utility scale” (beyond what can be exactly simulated classically).
IBM’s design philosophy for superconducting qubits also emphasizes manufacturability and connectivity. They use a 2D lattice of qubits on chip, and notably, IBM introduced the “heavy-hex” topology in 2020-2021 (starting with their 65-qubit Hummingbird and 127-qubit Eagle chips). The heavy-hex lattice connects each qubit to at most 2 or 3 neighbors (like a hexagonal grid with certain edges removed) rather than 4 neighbors as in a square grid. This slightly reduced connectivity was a conscious trade-off to minimize cross-talk and frequency collisions between qubits – two significant sources of error. The result was improved gate fidelities: by 2022, IBM reported that the majority of two-qubit gates on their Falcon r10 processors were achieving ~99.9% fidelity (only 1 error in 1000 operations). Achieving 99.9% two-qubit fidelity is an important milestone, as error correction overhead becomes significantly more manageable once error rates are in the 10^-3 (or better) range. IBM’s heavy-hex architecture, combined with tunable couplers (which they added to suppress undesired interactions), has virtually eliminated certain cross-talk errors and enabled these high fidelities. For instance, the newer 133-qubit Heron processor uses fixed-frequency transmon qubits with tunable coupling elements, and showed a 3-5× performance improvement over the earlier Eagle processor, with much reduced spectator errors. This indicates that IBM’s focus has not only been on more qubits, but also on better qubits – improving coherence times, gate fidelities, and stability with each generation.
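As a hands-on illustration of the heavy-hex trade-off, the Qiskit sketch below routes an arbitrary small circuit onto a heavy-hex coupling map; the extra SWAPs the transpiler inserts are the price of the reduced connectivity. The lattice size, test circuit, and basis gates are illustrative choices, not a model of any specific IBM processor.

```python
# Routing a circuit onto a heavy-hex lattice with Qiskit. The lattice size,
# random test circuit, and basis gates are illustrative, not a specific IBM device.

from qiskit import transpile
from qiskit.circuit.random import random_circuit
from qiskit.transpiler import CouplingMap

coupling = CouplingMap.from_heavy_hex(3)          # heavy-hex lattice, 19 qubits
print(f"heavy-hex lattice with {coupling.size()} qubits")

circ = random_circuit(12, depth=8, max_operands=2, seed=7)   # arbitrary test circuit

# Routing onto the reduced-connectivity lattice inserts SWAPs -- the cost of the
# heavy-hex trade-off (fewer neighbors in exchange for less crosstalk).
routed = transpile(
    circ,
    coupling_map=coupling,
    basis_gates=["rz", "sx", "x", "cx"],
    optimization_level=2,
    seed_transpiler=7,
)
print("original ops:", circ.count_ops())
print("routed ops:  ", routed.count_ops())
print("routed depth:", routed.depth())
```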
Of course, superconducting qubits come with trade-offs. They must be operated at deep cryogenic temperatures (~15 millikelvin) in large dilution refrigerators, which imposes overhead in infrastructure. IBM has world-class cryo-engineering (as evidenced by the striking IBM Quantum System One and Two cryostats), but cooling and maintaining thousands of qubits is non-trivial. Additionally, transmon qubits have relatively short coherence times (currently on the order of 100 microseconds T1/T2 for IBM’s best devices, sometimes up to a few hundred microseconds for certain isolated qubits). This means the qubits “forget” their quantum state fairly quickly, necessitating fast gate operations and/or error correction to do long algorithms. While 100 μs is good by superconducting standards (and has improved gradually), it still implies that without error correction, circuits can only be a few thousand operations long before noise dominates. This is why error correction is essential – and also why its overhead (using many physical qubits to extend coherence) is a burden. IBM’s current physical qubits, with ~99.9% fidelities and ~100 μs coherence, might require on the order of a few hundred physical qubits per logical qubit for the targeted logical error rates using LDPC codes. IBM is attacking this on multiple fronts: materials science to improve coherence (reducing two-level system defects in junctions and substrates), better qubit packaging to reduce loss, and smart software techniques like dynamical decoupling and error mitigation. Each new processor IBM releases tends to show incremental improvements in coherence and gate quality, indicating a strong learning curve in its hardware development.
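A quick toy calculation shows why those fidelity numbers cap uncorrected circuits at a few thousand operations, and why error mitigation and, ultimately, error correction are needed to go further. The numbers below are illustrative, using the ~99.9% gate fidelity quoted above.

```python
# Success probability of an uncorrected circuit with per-gate error p: ~(1 - p)^G.
# Illustrative numbers only, using the ~99.9% two-qubit fidelity quoted above.

p = 1e-3
for gates in (1_000, 5_000, 10_000, 100_000_000):
    success = (1 - p) ** gates
    print(f"{gates:>11,} gates -> raw success probability ~ {success:.2g}")
# Roughly 0.37 at 1,000 gates, <0.01 at 5,000, essentially zero at 10^8 gates --
# hence error mitigation for today's circuits and error correction for Starling-scale ones.
```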
Another strength of IBM’s approach is the integration with classical computing and software. IBM has been a leader in developing the software stack (Qiskit, runtime services, circuit transpilers, etc.) to make quantum processors usable in concert with classical resources. Their concept of quantum-centric supercomputing underlines that a quantum computer will rarely work alone – it will be an accelerator tightly coupled to classical preprocessors and postprocessors. IBM’s Qiskit Runtime and Quantum Serverless architecture allow portions of an algorithm to run on classical nodes (for example, iterative or parallel tasks) and seamlessly invoke quantum circuits when needed. They have even implemented features like dynamic circuits (feed-forward of measurement results within a circuit run) and concurrent job execution, which improves the effective computational power of their systems. The emphasis on a robust software ecosystem is a strength because it enables hybrid algorithms (like variational algorithms, circuit knitting techniques, etc.) that are believed to be the first to show practical quantum advantage. IBM is ensuring that as the hardware scales, the software will harness it efficiently – for example, by introducing a circuit layer operations per second (CLOPS) metric to measure speed (IBM achieved >150,000 CLOPS on their systems by 2024), and by using AI to optimize circuit compilation (their AI-driven transpiler can reduce gate counts by 20-50%).
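As a small example of what “dynamic circuits” means in practice, the Qiskit sketch below performs a mid-circuit measurement and feeds the outcome forward to condition a later gate. It is a minimal, construction-only example (no backend execution shown), and the specific circuit is illustrative rather than drawn from IBM’s documentation.

```python
# Minimal "dynamic circuit": a mid-circuit measurement whose result conditions a
# later gate via classical feed-forward. Construction only; no backend execution.

from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(2, "q")
m = ClassicalRegister(1, "m")
qc = QuantumCircuit(q, m)

qc.h(q[0])                      # put qubit 0 in superposition
qc.measure(q[0], m[0])          # mid-circuit measurement

with qc.if_test((m, 1)):        # feed-forward: act on qubit 1 only if we measured 1
    qc.x(q[1])

qc.measure_all()
print(qc.draw())
```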
In terms of comparative modality trade-offs: superconducting qubits (IBM’s choice) and other approaches (trapped ions, photonics, etc.) each have pros and cons. IBM’s superconducting qubits enjoy fast gates and can leverage semiconductor fabrication techniques to potentially mass-produce chips. They are currently the most mature in terms of multi-qubit demonstrations (IBM and Google have both shown 50-100 qubit systems with high coherence, and IBM has now demonstrated 1,000+ qubits). The trade-off is the dilution fridge and scaling overhead – control electronics, microwave lines, and cooling power become complex as qubit counts grow. IBM has addressed this by designing new cryo-control hardware that can handle many qubits (the IBM Quantum System Two is built to support the much larger qubit counts of coming processors, using novel signal multiplexing and cryogenic attenuation setups). Another trade-off is that fixed-frequency transmons need calibration and can suffer from frequency crowding; IBM mitigates this with tunable couplers and careful frequency allocation for each qubit. Alternatives like trapped ions have long coherence but very slow gates; photonic qubits are easy to distribute but lack a straightforward deterministic two-qubit gate. IBM’s superconducting approach is arguably the most industrialized: it has already delivered commercial quantum systems (IBM Quantum System One units are deployed in multiple countries and institutions), and incremental improvements are steadily being rolled out.
To summarize IBM’s modality: Superconducting qubits (transmons) with heavy-hex lattices and modular quantum processor interconnects. Strengths include fast operations, compatibility with modern fab methods (which enabled IBM to scale from 5 to 127 to 433 to 1121 qubits in a few generations), and a proven ability to integrate with a classical computing stack. The main weaknesses are environmental demands (ultra-cold temps, shielding) and noise requiring error correction overhead. IBM’s ongoing R&D – from improving physical qubit quality to developing better error correction codes – is geared to maximize the strengths and minimize the weaknesses of this platform. As IBM often points out, they’ve improved “scale, quality, and speed” in tandem: adding qubits, raising fidelities (quantum volume doubling repeatedly), and increasing runtimes (CLOPS) simultaneously, rather than focusing on only one metric.
Track Record
IBM’s track record in quantum computing is arguably second to none in terms of steady delivery on a public roadmap. Since announcing its quantum roadmap in 2020, IBM has hit every major milestone on schedule. This includes the hardware milestones already discussed: Eagle (127 qubits in 2021), Osprey (433 qubits in 2022), and Condor (1,121 qubits by end of 2023). Each of these was delivered as promised. For instance, Eagle was the first chip that crossed the line where classical simulation became unreliable for exact results, Osprey more than tripled that qubit count, and Condor – unveiled at the Quantum Summit 2023 – was the first chip to breach 1,000 qubits, something IBM had targeted for 2023 and indeed accomplished. IBM not only built these processors but also integrated them into working systems (IBM Quantum System One, etc.) that real users and researchers can access via the cloud. IBM now has dozens of quantum systems online, including machines hosted for partners (e.g., in Germany, Japan, Canada, etc.) – demonstrating an ability to deploy quantum computers outside the research lab environment.
Moreover, IBM has shown continuous improvement in quantum performance metrics beyond raw qubit count. A notable IBM-introduced metric is Quantum Volume (QV), which combines qubit number and fidelity into the size of the largest random circuit the computer can successfully implement. IBM doubled its record QV six times in five years, reaching QV = 256 by 2022 and reportedly higher after that. IBM also introduced the CLOPS speed benchmark (circuit layers per second) to quantify how fast their systems can execute layers of gates – and this saw a 100× improvement (from ~1,400 to 140,000+ CLOPS) between 2021 and 2023 thanks to software and hardware upgrades. These improvements are holistic: IBM’s progress comes from a “full-stack” approach (better hardware, better error mitigation, better compilers, etc.), not just from adding more qubits.
One illustrative achievement of IBM’s track record is the “100×100 Challenge” mentioned earlier. At the IBM Quantum Summit 2022, the company challenged itself and the community to run a 100-qubit circuit with depth 100 (up to roughly 5,000 two-qubit gate operations) in a day. In 2024, IBM announced they met this challenge – in fact, they ran a 100-qubit, 100-depth circuit (roughly 5,000 two-qubit gates in that particular instance) accurately on the IBM Quantum Heron processor, within hours. This was done on the second revision of Heron (156 qubits in a heavy-hex layout, with error mitigation) and improvements in the Qiskit runtime and compilation (e.g. using parametric circuits, parallelized executions, and fast feedback). Achieving this “utility-scale” circuit (one that couldn’t be brute-forced by classical means) demonstrated that IBM’s machines had entered a new regime of capability. Indeed, in 2023 IBM and UC Berkeley performed a 127-qubit circuit experiment that produced results beyond the reach of exact classical simulation (sometimes dubbed a quantum utility experiment, distinct from random circuit supremacy experiments). These milestones show IBM’s consistent forward momentum not just in theory but in practical demonstrations on real hardware.
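One of the software techniques mentioned above, parametric circuits, is easy to illustrate: a single transpiled template can be bound to many parameter values without recompiling each variant. The sketch below is a generic Qiskit example under that assumption, not IBM’s actual 100×100 workload.

```python
# Parametric circuits: transpile one template, then bind many parameter values.
# Generic example (arbitrary circuit and values), not IBM's 100x100 workload.

import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit import Parameter

theta = Parameter("theta")

template = QuantumCircuit(2)
template.ry(theta, 0)
template.cx(0, 1)
template.measure_all()

# Compile once against a generic basis, then bind values cheaply per execution.
compiled = transpile(template, basis_gates=["rz", "sx", "x", "cx"], optimization_level=1)
bound = [compiled.assign_parameters({theta: v}) for v in np.linspace(0.0, np.pi, 5)]
print(f"{len(bound)} executable circuits produced from one compiled template")
```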
IBM also has a strong track record of publishing peer-reviewed results, which adds credibility. Examples include IBM’s 2024 Nature paper on its new QEC code, its 2024 Science paper on effective error mitigation, and many others. IBM tends to back up roadmap claims with scientific publications (often co-authored with academic partners), which are then implemented on its devices. This transparency helps the broader community trust IBM’s claims. Executives like Dr. Dario Gil and Dr. Jay Gambetta frequently highlight that IBM “does what it says”. As evidence: “we have already delivered on the previous promises of our roadmap… and if we continue delivering along our roadmap, then we will realize fault-tolerant quantum computing on time,” IBM wrote in mid-2025.
Another aspect of IBM’s track record is the IBM Quantum System One – the world’s first integrated quantum computing product, unveiled in 2019. IBM System One is a beautifully engineered package with a cryostat, control electronics, and shielding in a standalone unit. IBM has installed System One machines for clients like Fraunhofer (Germany), University of Tokyo (Japan), Cleveland Clinic, and others, showing an ability to deliver quantum systems as a service or on-premises. In 2023, IBM launched Quantum System Two, the next-generation platform designed for modularity and higher qubit counts. The first System Two became operational at IBM’s lab in 2023, featuring three Heron processors linked for parallel execution. This demonstrates IBM’s prowess in building not just chips but the entire controlled environment needed for quantum computation at scale.
In summary, IBM’s track record is marked by:
- Regularly hitting roadmap targets (qubit counts and new technologies when promised).
- Advancing performance metrics (quantum volume, CLOPS, circuit depth) year over year.
- Delivering systems to users – IBM’s quantum cloud has been operational since 2016 and now supports a community of hundreds of thousands of users and dozens of organizations.
- Publishing key breakthroughs in top journals, underpinning the technical roadmap with scientific validation.
All of this underlies IBM’s reputation as a frontrunner in quantum computing. It’s often noted that while some companies keep their plans under wraps, IBM makes bold claims and then actually meets them, which is not trivial in this challenging field.
Challenges
Despite IBM’s successes, enormous challenges remain on the road to fully functional large-scale quantum computers. IBM itself acknowledges that exploring this “uncharted territory” means encountering fundamental engineering and physics hurdles – “Following our roadmap will require us to solve some incredibly tough engineering and physics problems”, as IBM stated plainly. Some of the key challenges include:
1. Modular Integration and Interconnects: IBM’s strategy of modular scaling (linking multiple chips and multiple cryostats) introduces the challenge of maintaining high-fidelity operations between chips. On a single chip, qubits are connected by short superconducting resonators with carefully calibrated couplings. But to connect chips, IBM is developing chip-to-chip couplers (for within the same fridge) and quantum communication links (for longer range between fridges). For example, the 2026 Kookaburra system will use chip-to-chip microwave couplers to link 3 chips into one logical processor. By 2027+, IBM plans to use “L-couplers” (likely microwave resonators bridging chips) and even microwave-to-optical converters to send quantum states between modules over fiber. Every time you extend the system, you risk additional loss and noise. Early demonstrations (IBM hinted at linking three smaller Flamingo chips by 2024 as a test) will likely have lower fidelity for inter-module gates. Managing this means the software/compilers might preferentially keep entangling operations local to one chip, or use error mitigation for cross-chip operations. Ensuring that multi-chip (and multi-module) systems behave like a single big computer is a non-trivial challenge IBM must solve through a combination of hardware (low-loss couplers, perhaps quantum repeaters for longer links) and software (intelligent job routing, etc.).
2. Qubit Coherence and Error Rates: Although IBM’s qubit fidelity is high, reaching true fault tolerance requires error rates to drop even further or be handled with huge redundancy. IBM’s current two-qubit error of ~0.1% (99.9% fidelity) is at the threshold where error correction starts to work, but the logical error rate after encoding might still be, say, 1e-3 per operation with a code of distance ~11 or so. To perform 100 million operations (Starling’s goal) with, say, 99% success probability, the logical error per operation needs to be on the order of 1e-10 or smaller (a rough numerical sketch of this budget, and of the decoder throughput discussed in the next point, follows this list). That implies either much better physical qubits or very large code distances (and thus many physical qubits per logical qubit). IBM will need to continue improving coherence times (perhaps via better materials or 3D integration to isolate qubits from sources of noise) and gate fidelities (through calibration, pulse engineering, and reducing crosstalk even further). Even minor issues like leakage or crosstalk that were negligible at 5,000 gate operations could become significant over 100 million operations. IBM’s research into mitigating two-level system defects (e.g., Heron R2 included new filtering to reduce spurious resonances) is one example of the painstaking engineering needed to incrementally push physical qubit performance. There is also the challenge of scale in control electronics: Condor already required over 13,000 control wires into the fridge and clever multiplexing to handle 1,121 qubits. Scaling to 10,000+ qubits might require new control chip architectures (perhaps cryo-CMOS controllers mounted near the qubits to reduce wiring). IBM’s System Two design is meant to address some of this by providing a larger, more scalable wiring infrastructure, but it is still a major undertaking to deliver signals to, and readouts from, so many qubits without introducing noise.
3. Real-time Error Correction Systems: IBM has designed a fast decoder algorithm, but implementing it in hardware and integrating it into the feedback loop is challenging. The decoder must receive a firehose of syndrome data from potentially thousands of measurements every few microseconds, process it, and output corrections to be applied back to the qubits – all with sub-microsecond latency if possible. IBM’s solution involves FPGAs/ASICs running the Relay-BP decoder locally. But designing and fabricating those custom decoder chips, then co-locating them with the quantum control system, is a complex project. It’s essentially building a special-purpose classical supercomputer (for decoding) that operates synchronously with the quantum computer. Any latency or processing bottleneck could cause a backlog and let errors slip by uncorrected. IBM’s papers suggest they’re confident this is feasible with existing technology (they emphasize the decoder is efficient enough to avoid needing large HPC servers). Still, this is cutting-edge engineering that hasn’t been demonstrated at full scale in a real machine yet. The “Starling proof-of-concept” in 2028 is supposed to showcase the decoder in action on a smaller scale, which will be a critical milestone.
4. Cooling and Infrastructure: As IBM scales to thousands of qubits, the size and complexity of the cryogenic systems grow. IBM Quantum System Two is roughly 3.7 meters tall and 6.7 meters wide (about 12 ft x 22 ft), and houses multiple cryostats and control racks. Future quantum data centers may have many such units. Managing the heat load (from control electronics, etc.) and maintaining millikelvin temperatures across large combined systems is non-trivial. Vibration, electromagnetic interference, and even cosmic rays (which can momentarily decohere many qubits at once) become concerns in a data center environment. IBM will have to implement shielding and error budgeting for such events (e.g., resetting error correction if a high-energy event causes a burst of errors). The cryogenic engineering challenge also includes reliability – keeping a fridge running continuously with minimal downtime. IBM has experience here, as its cloud-accessible systems typically run with high availability, but scaling up will stress those capabilities.
5. Software and Algorithm Adaptation: On the software side, IBM needs to ensure that compilers, schedulers, and algorithms can effectively use a machine with heterogeneous error rates (e.g., maybe cross-chip gates are 5× noisier than on-chip gates). This might involve clever compilation to minimize use of “expensive” operations, or dynamic routing of qubits. IBM is likely developing software to decide when to use which module, how to lay out logical qubits across multiple chips, etc. Additionally, while IBM expects quantum advantage by 2026 on error-mitigated circuits, ensuring that those algorithms scale to the fault-tolerant hardware by 2029 is another challenge. Essentially, IBM doesn’t want a gap where they build a great FT computer but have no software that takes full advantage of it. They are mitigating this by co-developing applications (for chemistry, machine learning, etc.) with partners now, on smaller machines, and testing them on ever larger circuits.
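To put rough numbers on points 2 and 3 above (the sketch referenced there), the snippet below works out the per-operation logical error budget for a 10^8-gate run, the code distance a generic error-suppression scaling would demand, and the syndrome-data rate a real-time decoder has to absorb. The scaling constant, threshold, and timing figures are illustrative assumptions, not IBM specifications.

```python
# Rough feasibility checks for challenges 2 and 3 above.
# All constants below (A, p_th, round time) are illustrative assumptions.

# (a) Logical error budget: ~1e8 operations with ~99% overall success probability.
target_success = 0.99
operations = 1e8
per_op_budget = 1 - target_success ** (1 / operations)
print(f"per-operation logical error budget ~ {per_op_budget:.1e}")   # ~1e-10

# (b) Code distance under a generic suppression law p_L ~ A * (p/p_th)^((d+1)/2),
#     with physical error p = 1e-3 and assumed A = 0.1, threshold p_th = 1e-2.
p, p_th, A = 1e-3, 1e-2, 0.1
d = 3
while A * (p / p_th) ** ((d + 1) / 2) > per_op_budget:
    d += 2
print(f"illustrative code distance needed: d ~ {d}")

# (c) Decoder throughput: one syndrome round every ~1 microsecond, with the
#     144 check measurements of a single [[144,12,12]] block per round.
round_time_s = 1e-6
checks_per_round = 144
print(f"syndrome rate ~ {checks_per_round / round_time_s / 1e6:.0f} Mbit/s per code block")
```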
IBM’s candid recognition of these challenges is actually a positive sign. They’ve said “We may be on track, but it isn’t easy… We’re attempting to rewrite the rules of computing in just a few years”. Notably, IBM also expressed that they are “feeling pretty confident” because of the team and ecosystem they’ve built around these problems. They cite their agile hardware development process – new iterations of chips every few months – as a way they learn and improve quickly. Also, by partnering widely (universities, national labs, startups in the IBM Quantum Network), IBM is effectively crowd-sourcing some of the innovation and application development needed.
In short, IBM’s challenges range from technical (physics) – like qubit coherence, wiring, couplers – to systemic (engineering) – like building a data center for QCs and integrating classical control – to algorithmic – making sure the complexity delivered is usefully employed. None of these are small tasks. However, IBM’s past performance and resource commitment (IBM has invested heavily, with a large dedicated quantum research division) suggest that if anyone can solve these, IBM is among the favorites. Overcoming these challenges is exactly what the remainder of the decade is scheduled for in IBM’s roadmap. Each year from 2023 to 2029 has “checkpoints” (e.g., demonstrate multi-chip coupling by 2024, demonstrate logical qubits by 2025-26, etc.) that address these risks one by one. We should expect incremental progress rather than one big leap – but IBM’s style so far has been steady and cumulative, which is well-suited to tackling hard engineering problems.