Industry News

IBM Unveils Condor: 1,121‑Qubit Quantum Processor

Yorktown Heights, N.Y., USA (Dec 2023) – IBM has announced “Condor,” a superconducting quantum processor with a record-breaking 1,121 qubits – the largest of its kind to date. Unveiled at the IBM Quantum Summit 2023, Condor marks the first quantum chip to surpass 1,000 qubits, a milestone many in the field have eyed as a crucial step toward practical quantum computing. The new processor, built on IBM’s heavy-hexagonal qubit architecture and cross-resonance gate technology, pushes the boundaries of scale in quantum hardware. With Condor, IBM more than doubles its previous qubit count record and sets a new high-water mark in the global race for quantum computing power.

This 1,121-qubit chip isn’t just a numbers game – it represents significant engineering breakthroughs. IBM reports Condor achieved a 50% increase in qubit density over prior designs, thanks to advances in fabrication and packaging. Fitting over a thousand superconducting qubits on a single slice of silicon required innovative 3D chip packaging and a mile of high-density cryogenic wiring inside the refrigerator. Despite its unprecedented scale, Condor’s performance (in terms of coherence times and gate fidelity) is said to be on par with its 433-qubit predecessor, Osprey, indicating that IBM managed to grow the processor’s size without a loss in quality. This feat – scaling up qubit count while maintaining performance – is viewed as an important “innovation milestone” for the industry.

Pushing the Frontier: Condor vs. Previous Quantum Processors

Condor’s debut comes on the heels of steady progress in superconducting quantum computing. In the past few years, IBM and others have been in a “qubit arms race,” steadily increasing qubit counts on a single chip. Condor emphatically breaks the 1,000-qubit barrier, dwarfing previous devices in this category:

  • IBM “Eagle” (2021) – 127 qubits. IBM’s first processor above 100 qubits, Eagle used the heavy-hexagonal (honeycomb) qubit layout and introduced the 3D wiring that made such scaling possible. IBM noted that Eagle was already too complex for classical computers to fully simulate, hinting at the onset of “uncharted computational territory.”
  • IBM “Osprey” (2022) – 433 qubits. IBM’s previous record-setter, Osprey more than tripled Eagle’s qubit count. It demonstrated IBM’s roadmap in action and foreshadowed the 1,121-qubit Condor planned for the following year.
  • Google “Sycamore” (2019) – 53 qubits. A milestone for quantum supremacy, Sycamore performed a random-circuit sampling calculation in 200 seconds that was estimated to take 10,000 years on the best classical supercomputer of the time. This 53-qubit chip proved that even a few dozen high-quality qubits can outperform classical machines on specific tasks.
  • USTC “Zuchongzhi 2” (2021) – 66 qubits (56 used). China’s answer to Sycamore, the Zuchongzhi 2 processor (led by Jian-Wei Pan’s team at USTC) executed an even more challenging random circuit sampling task, roughly 100× more complex than Sycamore’s benchmark. Using 56 of its 66 superconducting qubits, Zuchongzhi completed the task in about 1.2 hours, which would have taken an estimated 8 years on a classical supercomputer. This demonstrated a strong quantum advantage and underscored how adding just a handful of qubits can exponentially increase quantum computational power.
  • USTC “Zuchongzhi 3.0” (2024) – 105 qubits. Crossing the 100-qubit mark, the latest Zuchongzhi chip reportedly reached 105 functional qubits, roughly the scale of IBM’s 127-qubit Eagle. Early reports indicate its performance was on par with Google’s 105-qubit “Willow” processor, reflecting a neck-and-neck international race in superconducting qubit development.

By these comparisons, Condor’s 1,121 qubits stand out as an order-of-magnitude leap beyond other superconducting chips of its era. The previous contenders – IBM’s 433-qubit Osprey, USTC’s 66 and 105-qubit Zuchongzhi prototypes, Google’s 50-qubit-range Sycamore – all demonstrated impressive capabilities, but none approached a four-digit qubit count. In fact, IBM’s new chip is nearly three times larger than any prior superconducting processor publicly disclosed. This dramatic increase in scale doesn’t automatically equate to triple the computing power (since error rates and connectivity also factor in), but it signals that IBM has cleared major hurdles in manufacturability and design needed for scaling up quantum hardware.

Notably, IBM’s design strategy for Condor follows a lineage of “bird-named” processors all using a similar heavy-hexagonal qubit lattice. In this layout, each qubit is connected to only two or three neighbors (forming a honeycomb-like pattern) rather than the four neighbors of a square grid. This heavy-hex architecture was first adopted in IBM’s Falcon and Eagle chips to reduce crosstalk and errors by minimizing unwanted interactions. Condor continues this approach – essentially a much larger honeycomb network of transmon qubits. The chip’s 1,121 qubits are arranged in a heavy-hex lattice 43 qubits across, compared to 27 across for Osprey and 15 for Eagle. By extending the same 2D pattern, IBM preserved a proven design while multiplying the qubit count.
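
For readers who want to explore this topology hands-on, here is a minimal sketch (assuming a Python environment with Qiskit installed) that builds an idealized heavy-hex coupling map with Qiskit’s CouplingMap.from_heavy_hex generator and tallies how many neighbors each qubit has. Note that this generator is parameterized by an abstract code distance, so the qubit counts it produces do not exactly match IBM’s Falcon, Eagle, Osprey, or Condor devices.

    # Sketch (assumes Qiskit is installed): inspect idealized heavy-hex lattices.
    from collections import Counter

    from qiskit.transpiler import CouplingMap

    for distance in (3, 5, 7):  # the generator accepts odd code distances
        cmap = CouplingMap.from_heavy_hex(distance)
        degrees = Counter(len(list(cmap.neighbors(q))) for q in cmap.physical_qubits)
        print(f"distance {distance}: {cmap.size()} qubits, "
              f"neighbor-count distribution {dict(sorted(degrees.items()))}")
    # In every case no qubit has more than 3 neighbors, the heavy-hex property
    # that trades connectivity for lower crosstalk.

The same coupling map can also be handed to Qiskit’s transpiler to count the extra SWAP operations a given circuit needs on this sparse layout – the connectivity trade-off discussed further below.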

It’s also worth noting that Condor is a superconducting quantum processor, meaning it operates with qubits made from superconducting circuits (Josephson junctions) cooled to millikelvin temperatures. Other qubit technologies exist – such as trapped ions, photonics, and neutral atoms – and some have achieved impressive feats with fewer qubits. (For example, Jiuzhang, a photonic quantum computer in China, demonstrated quantum advantage with 76 and then 113 entangled photons in 2020-21, albeit for specialized boson-sampling tasks rather than general-purpose computing.) But among programmable, circuit-based quantum computers, superconducting platforms have led the qubit-count race. IBM’s Condor firmly extends that lead, at least in raw numbers.

Technical Breakthroughs Enabling 1,121 Qubits

Building a chip as large as Condor required solving numerous engineering challenges. How did IBM pack over a thousand qubits onto one processor and actually make it work? Several key technical advancements underlie Condor’s design:

  • 3D Chip Packaging & Qubit Density – IBM leveraged advanced 3D packaging to stack control wiring and readout circuitry on different layers from the qubits. In earlier processors, each qubit had dedicated control and readout hardware, which became unwieldy as qubit counts grew. Condor uses multiplexing techniques so that one set of electronics can manage multiple qubits. This innovation, introduced with Eagle, freed up physical space and allowed a 50% increase in qubit density on the Condor chip without compromising signal integrity. Essentially, IBM made the qubits smaller and closer together (with on-chip signal isolators to prevent interference), fitting 1,121 qubits in a chip area only modestly larger than Osprey’s.
  • Heavy-Hex Lattice Topology – As mentioned, Condor continues IBM’s heavy-hexagon qubit layout. This choice reduces error rates by limiting each qubit’s neighbors, which cuts down stray coupling and crosstalk. The trade-off is that each qubit can interact directly with fewer partners, so implementing certain two-qubit gates or algorithms may require additional steps to route entanglement through the lattice. IBM has deemed this an acceptable trade, prioritizing qubit quality and yield at scale over all-to-all connectivity. The Condor chip’s honeycomb arrangement is thus a deliberate design to keep errors manageable even as qubit count skyrockets.
  • Cross-Resonance Gates with Fixed-Frequency Qubits – Condor, like IBM’s prior superconducting processors, uses fixed-frequency transmon qubits and entangles them using cross-resonance microwave pulses. In a cross-resonance (CR) gate, driving one qubit at the resonant frequency of a neighboring qubit induces an effective two-qubit ZX interaction, which, combined with single-qubit rotations, yields a controlled-NOT operation (a small numerical sketch of this construction follows this list). The advantage of this approach is that qubits don’t need to be tuned in frequency during operations, simplifying control for a large array of qubits. IBM has honed the CR gate technique over several generations, achieving high-fidelity two-qubit gates on heavy-hex lattices. Condor’s successful implementation of 1,121 qubits with CR gates validates that IBM’s fixed-frequency, microwave-driven gate scheme can scale to thousand-qubit levels. (By contrast, Google’s Sycamore used tunable qubit frequencies and a different gate scheme, while some others use tunable couplers – approaches that have their own pros/cons at scale.)
  • Cryogenic I/O and “Quantum Refrigerator” Innovations – Simply controlling and reading out 1,121 qubits is a herculean wiring task. IBM had to route over a mile of high-density flexible cabling inside the dilution refrigerator to connect the Condor chip with its control electronics. They also expanded the cryogenic hardware (IBM built a giant custom cryostat, sometimes dubbed the “super-fridge”, originally unveiled for the 433-qubit Osprey system) to physically accommodate the larger chip and all the wiring. Achieving this level of integration while maintaining millikelvin temperatures and low noise required new engineering in cabling, filtering, and packaging. Condor’s successful deployment shows that IBM can manage extreme cryogenic I/O scaling – a prerequisite for building even larger quantum machines.
  • Fabrication Yield and Qubit Uniformity – A quantum chip is only as good as the uniformity and yield of its qubits. Manufacturing superconducting qubits involves delicate nanofabrication (for Josephson junctions) and even minor defects can render qubits inoperative or too short-lived. By reaching 1,121 qubits on one chip, IBM had to demonstrate very high yield and consistent quality across a large die. Improvements in materials, fabrication processes, and chip design (like larger chip “laminate” size and better on-chip filtering) were implemented to ensure most of those 1,121 qubits meet performance specs. The fact that Condor’s coherence and gate fidelities stayed comparable to the 433-qubit device suggests IBM solved many variability issues and can scale up without significant per-qubit degradation.
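
To make the cross-resonance picture above concrete, the following sketch (again assuming a Python environment with Qiskit installed) checks numerically that a single ZX-type rotation – modeled here by Qiskit’s RZX gate, the effective interaction an echoed CR pulse produces – together with one single-qubit rotation on each qubit reproduces a CNOT up to a global phase.

    # Sketch (assumes Qiskit is installed): a CNOT built from one effective
    # ZX interaction (as produced by an echoed cross-resonance pulse) plus
    # single-qubit rotations, verified up to a global phase.
    import numpy as np
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Operator

    cr_based = QuantumCircuit(2)
    cr_based.rz(np.pi / 2, 0)        # frame rotation on the control qubit
    cr_based.rx(np.pi / 2, 1)        # rotation on the target qubit
    cr_based.rzx(-np.pi / 2, 0, 1)   # effective ZX interaction from the CR drive

    reference = QuantumCircuit(2)
    reference.cx(0, 1)               # textbook CNOT: control 0, target 1

    # equiv() compares the two unitaries up to a global phase.
    print(Operator(cr_based).equiv(Operator(reference)))  # expected: True

On real hardware the CR interaction also contains unwanted terms that are suppressed with echo sequences and careful calibration; the circuit above only captures the idealized algebra, not those pulse-level details.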

Individually, each of the above advances is incremental, but together they enabled a quantum processor of unprecedented scale. Importantly, Condor is still a NISQ-era device – it does not incorporate error-correcting codes or fully solve the noise problem. Rather, IBM describes Condor as an exploratory platform that “solves scale” and informs future hardware design. In other words, Condor’s value is as a stepping stone: by building it, engineers learned how to manage large qubit systems and identified what will be needed for the next generation of quantum computers.

Toward Useful Quantum Computing – Why Condor Matters

Breaking the 1,000-qubit barrier carries symbolic and practical significance for the broader quest to achieve quantum advantage in real-world applications. Quantum advantage is reached when a quantum computer performs a task beyond the reach of classical computers in any reasonable time. Google’s 53-qubit Sycamore in 2019 was a seminal proof-of-concept, and subsequent devices like USTC’s 66-qubit Zuchongzhi 2 strengthened that claim with even harder tasks. However, those demonstrations were specialized and not immediately useful for industry problems – they mainly showed that the quantum hardware was doing something very hard for a classical supercomputer.

IBM’s approach with Condor is a bit different. Rather than focusing on a single benchmark task, IBM is aiming for general-purpose capability at scale. A processor with over 1,000 qubits could in principle run much larger quantum circuits, tackling more complex algorithms (for chemistry, optimization, machine learning, etc.) than 50- or 100-qubit devices can. Each additional qubit doubles the size of the quantum state space, so moving from 433 qubits (Osprey) to 1,121 qubits (Condor) multiplies the potential state space by an astounding factor (roughly 2^688 times larger). No classical supercomputer can brute-force simulate 1,121 qubits – that’s 2^1121 basis states – so Condor firmly enters a regime impossible to fully emulate classically. This opens the door for running experiments that have no known classical solution or simulation, which is key for discovering useful quantum algorithms.
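
A quick back-of-the-envelope calculation (plain Python, assuming 16 bytes per double-precision complex amplitude) shows why brute-force state-vector simulation is hopeless at these sizes:

    # Memory needed to store a full state vector, assuming 16 bytes per
    # complex amplitude (double precision). Illustrative only.
    BYTES_PER_AMPLITUDE = 16

    for qubits in (53, 433, 1121):
        amplitudes = 2 ** qubits                     # dimension of the state space
        bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
        print(f"{qubits:5d} qubits -> 2^{qubits} amplitudes, "
              f"about 10^{len(str(bytes_needed)) - 1} bytes of memory")

    # Jump in state-space dimension from Osprey (433) to Condor (1,121):
    print("scaling factor: 2 **", 1121 - 433)

Even the 53-qubit case already needs on the order of a hundred petabytes of memory; at 1,121 qubits the figure is around 10^338 bytes, absurdly beyond any conceivable classical machine.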

That said, quantum utility – performing useful tasks better than classical computers – is not just about qubit count. Quality matters as much as quantity. A primary reason IBM and others have not simply thrown thousands of qubits at real-world problems yet is the noise and error rates: today’s physical qubits are error-prone and short-lived, limiting the size of circuits (number of operations) one can run before the result becomes garbage. IBM’s own roadmap acknowledges this, which is why alongside Condor’s scale, the company introduced the 133-qubit “Heron” chip focusing on error reduction. The Heron processors use a new design with fixed-frequency qubits and tunable couplers to dramatically cut crosstalk, achieving 3–5× better error rates than the earlier 127-qubit Eagle. Interestingly, IBM’s Quantum System Two – the next-generation quantum computing platform unveiled alongside Condor – will favor multiple Heron chips over a single Condor for running computations. In other words, while Condor proves that a 1,121-qubit device can be built, IBM appears more excited about using smaller, high-fidelity chips tiled together to reach practical quantum computing sooner.

The rationale is straightforward: A 1,121-qubit processor with the same error rate as a 433-qubit one doesn’t let you run deeper circuits – it just gives you more qubits to entangle, which is great for certain demonstrations but less useful if each qubit can’t reliably operate for long sequences. In contrast, a 133-qubit chip with 5× lower error rates can execute significantly more operations per qubit before decohering, enabling more complex algorithms even with fewer qubits. IBM is essentially hedging: Condor addresses the scale challenge, while Heron addresses the quality challenge. Both scale and quality are needed for quantum computers to solve meaningful problems. The ultimate goal is to combine them – e.g. by networking multiple medium-sized, low-error chips into one system – to achieve quantum advantage on practical tasks.
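
The arithmetic behind that claim is simple: if each two-qubit gate fails with probability p, a circuit containing n such gates succeeds with probability roughly (1 - p)^n, so a 5× reduction in p buys roughly 5× more gates before the output degrades to noise. A toy calculation (plain Python; the error rates are illustrative placeholders, not IBM’s published device numbers) makes this concrete:

    # Toy model: circuit success probability ~ (1 - p) ** n_gates, where p is
    # the two-qubit gate error rate. Error rates below are illustrative only.
    import math

    def max_gates(p: float, target_fidelity: float = 0.5) -> int:
        """Largest gate count n with (1 - p) ** n >= target_fidelity."""
        return math.floor(math.log(target_fidelity) / math.log(1.0 - p))

    baseline_p = 1e-2   # hypothetical baseline two-qubit error rate (1.0%)
    improved_p = 2e-3   # hypothetical 5x-lower error rate (0.2%)

    print("two-qubit gates before fidelity drops below 50%:")
    print("  p = 1.0%:", max_gates(baseline_p))   # about 68 gates
    print("  p = 0.2%:", max_gates(improved_p))   # about 346 gates, ~5x deeper

In other words, more qubits widen a circuit, but only lower error rates let it run deeper.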

From an industry perspective, IBM Condor’s launch signals a new phase in the quantum computing race. IBM has staked a claim to the largest superconducting chip, but rivals are close on its heels. Google’s Quantum AI division has been pursuing quantum error correction and reportedly built a 72-qubit and then ~100-qubit device (Google’s 105-qubit “Willow” was reported in late 2024). In China, besides the USTC academic efforts (Zuchongzhi superconducting chips and Jiuzhang photonic machines), a commercial effort led by the Chinese Academy of Sciences and company QuantumCTek produced a 136-qubit and then a 504-qubit superconducting chip named “Xiaohong” by 2024. While Condor held the crown in 2023 as the largest gate-based quantum processor, it’s clear that a global competition is driving rapid improvements. This competition is generally positive for the field: it spurs innovation and investment, bringing us closer to the day when quantum computers transition out of the lab and into real-world use.

Broader Implications: A Step Toward Quantum Advantage and New Challenges

Condor’s achievement resonates beyond just IBM’s portfolio. For the research community, having access to a 1,121-qubit device (even if only in a limited capacity initially) offers a testbed for scaling up quantum algorithms and exploring phenomena like entanglement at unprecedented system sizes. It will help researchers learn how quantum circuits behave as we approach the thousand-qubit scale, and what new issues arise (in calibration, error crosstalk, readout, etc.) that weren’t apparent at a few hundred qubits. These lessons are crucial for guiding the design of future 10,000+ qubit systems.

In the context of cybersecurity, each quantum computing milestone naturally draws attention to the question: How close are we to breaking public-key cryptography? The consensus is that even 1,121 noisy physical qubits are nowhere near enough to crack RSA or other cryptographic algorithms – that feat likely requires millions of physical qubits running error-corrected algorithms, far beyond current technology. However, the steady march from 50 to 100 to 1,000 qubits is a wake-up call. It underscores that quantum hardware is improving at an exponential pace, and it lends urgency to efforts in post-quantum cryptography (developing encryption methods safe against quantum attacks). Governments and enterprises tracking quantum developments will view Condor’s debut as evidence that more powerful quantum machines will materialize in the coming years, and that they should prepare their security infrastructure well in advance.
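
For a sense of scale, one widely cited resource estimate (Gidney and Ekerå, 2019) put factoring a 2048-bit RSA modulus at roughly 20 million noisy physical qubits running for about eight hours, assuming gate error rates around 0.1%. A simple comparison against Condor, using that published figure, looks like this:

    # Rough comparison of Condor's qubit count against one published estimate
    # for factoring RSA-2048 (Gidney & Ekera 2019: ~20 million noisy qubits).
    CONDOR_QUBITS = 1_121
    RSA_2048_PHYSICAL_QUBIT_ESTIMATE = 20_000_000

    shortfall = RSA_2048_PHYSICAL_QUBIT_ESTIMATE / CONDOR_QUBITS
    print(f"Condor supplies {CONDOR_QUBITS:,} qubits; the estimate calls for "
          f"{RSA_2048_PHYSICAL_QUBIT_ESTIMATE:,} (about {shortfall:,.0f}x more), "
          "before even considering error rates, gate speeds, or runtime.")

The gap is four orders of magnitude in raw qubit count alone, which is why the near-term concern is preparation (migrating to post-quantum cryptography) rather than imminent breakage.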

Meanwhile, industries such as pharmaceuticals, finance, and materials science are watching for signs that quantum computers might soon tackle useful tasks in optimization or simulation. IBM and others have been touting the idea of reaching “quantum utility” in the near term – solving specific business-relevant problems faster or better than classical computers by harnessing quantum methods. Condor by itself doesn’t deliver quantum utility yet (due to noise and lack of error correction), but it is a crucial enabler on the path toward that goal. Large NISQ processors like Condor can be used in hybrid quantum-classical algorithms and error mitigation techniques (for example, exploring quantum variants of machine learning models or simulating small molecular systems) that inch toward practical advantage. As quantum hardware scales, even noisy solutions might outperform classical ones for certain tasks via clever algorithm design. Each hardware leap expands the horizon of what problems can be attempted.

Finally, Condor’s introduction highlights a strategic pivot in how future quantum systems will be built. IBM’s roadmap suggests that rather than continuing to build ever-larger monolithic chips, the next step is modularity – linking multiple chips together in one system. In fact, IBM’s Quantum System Two is designed to house multiple smaller processor tiles connected via cryogenic interconnects. Condor essentially pushed the single-chip approach to its feasible extreme (IBM’s engineers note the Condor die is “really big already” and further scaling this way becomes impractical). Going forward, IBM and others plan to connect chips like tiles to scale to many thousands of qubits. Condor will thus inform how those modules can be integrated – for instance, testing the limits of fridge capacity, wiring, and control software for large qubit counts. In a sense, Condor is both an endpoint and a beginning: the end of one roadmap phase (monolithic scaling) and the beginning of quantum computing’s modular era.

Conclusion

IBM’s Condor processor is a landmark in quantum computing: at 1,121 qubits, it is the first superconducting quantum chip to cross the 1,000-qubit mark and the largest general-purpose quantum processor announced to date. It builds on years of methodical progress – from 50-qubit demonstrations of quantum supremacy to 100- and 400-qubit engineering prototypes – and in doing so, Condor transitions the field into four-digit-qubit territory. This achievement is as much about engineering as it is about raw numbers: IBM demonstrated that it could scale up complexity without letting the system implode under errors or engineering constraints. The lessons learned from Condor’s design and deployment will reverberate through the quantum research community, guiding how next-generation systems are built.

While Condor won’t instantly make quantum computers outperform classical ones on practical problems, it significantly closes the gap toward that reality. It underscores a broader trend: quantum hardware is rapidly maturing, and obstacles that once seemed daunting – like controlling 1,000+ entangled qubits – are being overcome. In combination with parallel advances in qubit fidelity, error mitigation, and software, the quantum ecosystem is steadily marching toward machines of true utility. Condor’s arrival is thus a beacon of progress: an encouraging sign that the long-promised power of quantum computing is coming into focus, one breakthrough at a time, as researchers strive to harness these 1,121 qubits (and beyond) for real-world impact.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.