IBM and Cisco Want to Network Fault‑Tolerant Quantum Computers
IBM and Cisco’s joint announcement this week is easy to misread as another “quantum + internet” headline. It isn’t. The two companies are laying out a step‑by‑step program to turn stand‑alone fault‑tolerant machines into a fabric: first a proof‑of‑concept linking multiple fault‑tolerant computers within five years, then a broader, distributed network in the early 2030s, and – if the physics and engineering behave – a fledgling “quantum computing internet” toward the late 2030s. That’s more than ambition; it’s a system architecture with timelines, components, and explicit research gaps.
IBM’s own blog frames the idea in one sentence I think is spot‑on: “Fault‑tolerant quantum computing may be our present goal, but it’s just part of the IBM vision for the future of computing.” The company sketches a progression that will feel familiar to anyone who watched mainframes give way to clusters and, eventually, the internet: first you get a credible single fault‑tolerant system (IBM’s Starling program), then you cable modules together across short distances inside a data center, and finally you extend the links across buildings and metros. At each step, the point isn’t just more qubits; it’s connected qubits.
The connective tissue here is a device IBM calls the quantum networking unit, or QNU. If a quantum processor (the QPU) is your server, the QNU is its network interface – an adapter that converts the “stationary” microwave‑domain qubits living on a superconducting chip into “flying” qubits you can move over a link, and then back again on the other side. For short hops, those links can sit inside or between cryostats; for building‑scale and metro‑scale distances, you need a much nastier ingredient: microwave–optical transducers that faithfully translate quantum states from the cryogenic microwave world into telecom photons that can survive fiber, and back again, with almost no extra noise. IBM and Cisco call out those transducers, plus the software stack above them, as the critical inventions for their first demonstration around 2030.
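To see why "almost no extra noise" is the hard part, here's a toy budget for such a link. This is my own illustrative depolarizing-noise model with made-up stage fidelities, not IBM or Cisco numbers: each transducer hop and the fiber stage each nudge an entangled pair's fidelity toward the useless 0.25 floor, so even very good components compound.

```python
# Toy fidelity budget for a microwave -> fiber -> microwave entanglement link.
# All numbers are illustrative placeholders, not IBM/Cisco specifications.

def link_fidelity(f_transducer: float, f_fiber: float, n_transducers: int = 2) -> float:
    """Crude model: each noisy stage acts like a depolarizing channel,
    mapping pair fidelity F -> s*F + (1-s)/4 for stage quality s."""
    f = 1.0  # start from a perfect Bell pair
    for s in [f_transducer] * n_transducers + [f_fiber]:
        f = f * s + (1.0 - s) / 4.0
    return f

# One transducer per end at 99% each, plus a 99.5% fiber stage:
print(round(link_fidelity(0.99, 0.995), 4))  # -> 0.9814
```

The point of the sketch: three stages at 99–99.5% already cost nearly two points of fidelity, and entanglement distillation thresholds leave little headroom, which is why the transducers are named as the critical invention.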
Cisco’s role is exactly where you’d expect it: the network. Over the past year the company has been assembling a quantum‑networking portfolio that looks less like a moonshot and more like the early scaffolding of a product line: a quantum network entanglement chip that generates entangled photon pairs at telecom wavelengths (so it plays nicely with existing fiber), a dedicated quantum lab in Santa Monica, and control‑plane software designed to distribute entanglement on demand and reconfigure paths with sub‑nanosecond timing. In this collaboration, Cisco’s fabric is meant to ferry entanglement to whatever QNUs the algorithm needs, while IBM focuses on the error‑corrected compute on each end. It’s scale‑out, not just scale‑up.
If you’ve followed IBM Quantum for a while, you’ll recognize why this particular announcement is worth taking seriously. The group has a habit that rivals find frustrating: it tends to ship what it puts on slides. The milestone cadence is public and (so far) surprisingly reliable: Eagle at 127 qubits in 2021; Osprey at 433 in 2022; Condor crossing 1,000 qubits in 2023; and the Heron architecture, which deliberately pivoted from raw counts to lower error rates and kicked off IBM Quantum System Two. Those are not rumors; they’re machines you can point to.
The next act is fault tolerance. IBM’s Starling program aims to deliver the first large‑scale, fault‑tolerant system by 2029, capable of ~200 logical qubits and on the order of 100 million logical operations – enough to make distributed experiments meaningful. IBM has even published the intermediate waypoints: modular processors, upgraded decoding, and the hardware plumbing needed to get from today’s “utility‑scale” machines to error‑corrected ones. Cisco, for its part, is already talking about the software and control needed to stitch multiple such systems together into something an application developer can treat as one resource. I read this as two mature teams picking different hard problems and meeting in the middle.
The IBM blog puts a concrete near‑term stake in the ground: “Our first milestone will be entangling a pair of cryogenically separated quantum processors within the next five years.” That aligns with the press release’s plan to demonstrate a network that can run joint computations across tens to hundreds of thousands of qubits by ~2030, and with separate work IBM is doing with the DOE‑funded SQMS center at Fermilab to link processors across multiple cryogenic setups. Again, the theme is not a single giant box; it’s a data‑center view of quantum.
Why should people in security and cryptography care? Because the timelines line up uncomfortably well. On the defense side, NIST finalized the first three post‑quantum standards – ML‑KEM, ML‑DSA and SLH‑DSA – in August 2024, and the NSA’s CNSA 2.0 guidance calls for all U.S. national security systems to be quantum‑resistant by 2035. Those are policy clocks ticking today.
On the attack side, the theoretical resource estimates for Shor‑style cryptanalysis have been trending down. The well‑known Gidney–Ekerå estimate (2019; published 2021) put RSA‑2048 in the realm of ~20 million noisy qubits for an eight‑hour break under reasonable surface‑code assumptions. In May 2025, Craig Gidney revisited the problem and argued that with newer arithmetic and error‑correction tricks, fewer than a million noisy qubits could suffice for a sub‑week break – same physical assumptions, longer runtime, but a dramatically lower qubit count. These are not realizable machines you can rent today; they’re physics‑budget estimates. But they’re moving toward the regime IBM and Cisco are now publicly planning to populate with real hardware.
That doesn’t mean IBM and Cisco just promised “Q‑day.” The companies are careful to say many of the technologies they need, especially high‑efficiency microwave‑to‑optical transducers, are still research‑grade, and the timelines are “subject to change.” It’s hard engineering. The decoder stacks for error correction must run in tight real‑time loops; the network has to deliver entanglement where and when the algorithm needs it; the compilers must decide what to keep local and what to teleport, without blowing the error budget. And once you cross data‑center boundaries, sovereignty and governance questions arrive quickly: who operates the backbone, how export controls apply to cross‑border entanglement, and what “zero trust” even means when part of your computation is being shared as an entangled state.
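To give a feel for how tight those real-time decoding loops are, here's a rough throughput estimate. The syndrome cycle time (~1 µs, a commonly quoted figure for superconducting hardware) and the checks-per-logical-qubit count are my assumptions for illustration; only the 200-logical-qubit figure comes from the Starling roadmap:

```python
# Illustrative real-time decoding budget. CYCLE_S and CHECKS_PER_LOGICAL
# are assumed round numbers for a sketch, not published IBM specifications.

CYCLE_S = 1e-6            # assumed syndrome-extraction cycle (~1 microsecond)
CHECKS_PER_LOGICAL = 100  # assumed syndrome measurements per logical qubit/cycle
LOGICAL_QUBITS = 200      # Starling-scale target from the public roadmap

# The decoder must consume syndrome data at least this fast, forever,
# or it falls behind and the error-correction loop breaks down.
bits_per_second = LOGICAL_QUBITS * CHECKS_PER_LOGICAL / CYCLE_S
print(f"{bits_per_second:.2e} syndrome bits/s sustained")
```

Under these assumptions that's on the order of 10^10 syndrome bits per second, decoded with hard latency deadlines. Now add a network that must deliver entanglement into that loop on schedule, and the engineering difficulty the companies are hedging about becomes obvious.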
Still, I’m bullish on this one – precisely because it’s not a lone‑wolf moonshot. It’s a division of labor between a company that has earned credibility for shipping to a roadmap and a company whose core competence is turning complicated networks into reliable infrastructure. The most plausible way to get to cryptographically‑relevant quantum computers (CRQCs) is not a single monolith that appears overnight; it’s a messy, incremental march toward error‑corrected modules that can be composed, scheduled, and, eventually, networked. This is what that march looks like when it grows up.
For security leaders, the practical read is simple. Assume the early‑2030s will feature networked fault‑tolerant systems in the cloud, even if they’re rare and expensive. Assume well‑funded actors will try to bend such infrastructure toward cryptanalysis as algorithms and implementations improve. And act as if “harvest‑now, decrypt‑later” adversaries are already collecting the traffic that will still matter a decade from now—because many are. The good news is that the countermeasures are no longer theoretical: NIST’s PQC standards are final, commercial stacks are arriving, and the 2035 policy horizon is public. Use it.
Quantum Upside & Quantum Risk – Handled
My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.