Quantum Computing Companies

Intel

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

Intel’s approach to quantum computing centers on leveraging its semiconductor engineering prowess to develop a scalable hardware platform. Unlike competitors focusing on superconducting circuits or trapped ions, Intel has invested heavily in silicon spin qubits – quantum bits encoded in the spin of single electrons in silicon quantum dots.

Intel began exploring quantum hardware in earnest through a 2015 partnership with QuTech in the Netherlands, and initially pursued superconducting qubits as well. In 2017, Intel delivered a 17-qubit superconducting test chip to QuTech, demonstrating improved packaging and interconnects for qubit stability. This was followed in 2018 by a 49-qubit superconducting chip codenamed “Tangle Lake,” marking Intel’s entry into the race for more qubits. However, Intel soon pivoted emphasis toward spin qubits, which are about 50×50 nm in size, roughly the size of a transistor, making them up to one million times smaller than other qubit types like superconducting loops. This small size and compatibility with CMOS fabrication give spin qubits a potential scaling advantage.

Intel’s long-term vision is to apply decades of mass-production experience (lithography, materials, and transistor design) to quantum processors, enabling eventual integration of millions of qubits on a chip. The company openly acknowledges that achieving a fault-tolerant quantum computer is a long-term challenge with “fundamental questions and challenges” remaining. To tackle this, Intel’s quantum research spans the full stack – from qubit devices and cryogenic control electronics to software and algorithms – but with an internally developed hardware focus at its core.

Key components of Intel’s strategy include its silicon spin qubit chips (e.g. the 12-qubit Tunnel Falls device), cryogenic control ASICs (e.g. the Horse Ridge series and new Pando Tree chip), and advanced packaging – all aimed at a scalable, integrable quantum system.

Milestones & Roadmap

Intel’s quantum hardware timeline features steady, methodical progress rather than splashy qubit-count leaps. Early milestones came via its QuTech alliance: the 17-qubit and 49-qubit superconducting chips (in 2017 and 2018 respectively) demonstrated Intel’s ability to fabricate and package quantum processors on 300 mm wafers. By 2018, Intel had also developed its first silicon spin qubit fabrication flow, leveraging standard transistor process techniques to make quantum-dot devices on a scale smaller than a pencil’s eraser. This laid the groundwork for Intel’s shift toward spin qubits. In late 2019, Intel unveiled Horse Ridge, a cryogenic control chip named after one of the coldest locations in Oregon. Horse Ridge was implemented in Intel’s 22 nm CMOS technology and could operate at 4 K, generating microwave pulses to control multiple qubits from inside the cryostat. This was a key milestone for tackling the “wiring bottleneck,” as it moved bulky room-temperature control electronics down to the cryogenic environment. By December 2020, Horse Ridge II was introduced with expanded capabilities – including the ability to drive up to 16 spin qubits via on-chip direct digital synthesis and to perform qubit state readout, using 22 high-speed DAC channels. In parallel, Intel and QuTech demonstrated “hot” qubit operation at ~1.1 K in a 2020 Nature paper, hinting that silicon spin qubits might relax extreme refrigeration requirements in the future.

The spin qubit processor development reached a new level in 2023. In June, Intel announced Tunnel Falls, a 12-qubit silicon spin qubit chip fabricated on 300 mm wafers at Intel’s D1 facility. Tunnel Falls is Intel’s most advanced spin-qubit chip to date, featuring a linear array of 12 quantum-dot qubits with gate layouts compatible with industry fabrication rules. Notably, it was produced with deep ultraviolet and EUV lithography, achieving ~95% yield across the wafer – over 24,000 multi-qubit devices per wafer – with uniform threshold voltages similar to standard CMOS processes. By “reusing” its transistor process technology, Intel could attain high device uniformity and reproducibility, crucial for scaling up qubit counts. Tunnel Falls was not a commercial product but rather a research chip distributed to universities and national labs via a partnership with the U.S. LPS Qubit Collaboratory (LQC). This “quantum sandbox” approach helps academic partners experiment with multi-qubit operations on a reliable silicon platform, feeding back learning to Intel and building a talent pipeline for the technology. Intel has already indicated that a next-generation spin qubit chip – building on Tunnel Falls – is in development and expected to be released in 2024. While details are under wraps, Intel’s focus is on improving qubit count and performance “quality” rather than chasing sheer numbers. Intel Labs’ director Rich Uhlig hinted that “we are working on another one…I won’t say how many [qubits]. For us, it’s less about the number and more about the quality.” This underscores that the technical roadmap prioritizes qubit fidelity, uniformity, and integration for error correction, before aggressive scaling of qubit quantity.

Beyond 2024, Intel’s public communications have been conservative about specific timelines. The company has avoided committing to a target date for a large-scale quantum computer or a demonstrated logical qubit, in contrast to some competitors’ roadmaps. Gartner analysts have noted that Intel is “taking a longer view” and a more cautious approach, focusing on fundamentals rather than engaging in the qubit-count hype race. Pat Gelsinger (Intel’s CEO) has emphasized that Intel might be the only company “using the same [silicon] process and materials we’re already using” for qubits – and that if it works, “we can do this at scale.” In Intel’s view, the path to a commercially relevant quantum computer likely spans the latter half of this decade and beyond. Their internal roadmap aims to methodically increase qubit counts (dozens to hundreds, then more) in concert with improvements in coherence and control, rather than achieving a specific qubit number by a set year. Ultimately, Intel concurs with industry consensus that millions of physical qubits will be needed for fault-tolerant quantum computing that can solve real-world problems. Thus, the long-range roadmap is aligned with that scale, even if Intel refrains from public date-setting. Intel’s near-term milestones focus on “scaling quantum devices and improving performance with its next generation quantum chip,” along with adding connectivity (2D qubit arrays) and demonstrating high-fidelity 2-qubit gates on an industrial process. Each of these milestones – from multi-qubit chips to advanced cryo-controller releases – forms part of Intel’s systematic march toward a large-scale, fault-tolerant quantum system.

Focus on Fault Tolerance

From the outset, Intel’s program has framed fault tolerance as the ultimate goal, guiding its choices in qubit architecture and system design. A fault-tolerant quantum computer requires implementing quantum error correction (QEC) to overcome qubit errors (decoherence, gate errors). Intel’s strategy is to improve the “three pillars” needed for QEC: qubit quality, uniformity, and scalable control. Rather than simply multiplying noisy qubits, Intel has prioritized increasing coherence times and gate fidelities of its qubits. In silicon, isolated single-electron spins can have very long coherence. In practice, the Tunnel Falls devices use isotopically purified silicon in the quantum well to reduce magnetic noise, which has indeed “enhanced coherence times as expected” in recent tests. Intel reported single-qubit fidelities on silicon devices around 99.9% under optimized CMOS processes – a promising level on par with the best superconducting qubits. The harder challenge is two-qubit gate fidelity, which Intel is actively working to maximize on the next-gen chips. High two-qubit fidelity (99%+ range) is essential to reach the error correction threshold for codes like the surface code. Intel’s hardware is being designed to support QEC-friendly topologies: their quantum dot arrays naturally form a 2D grid of nearest-neighbor coupled qubits, where each qubit can potentially interact with four neighbors. This is exactly the connectivity needed for the surface code (a leading QEC scheme), in which each data qubit is typically surrounded by four measurement (ancilla) qubits. Jim Clarke, Intel’s quantum hardware director, noted that “Intel’s [qubit] topology is focused on…each qubit connected to four others, which is key to error correction.” This indicates that as Intel scales up to 2D qubit arrays, they have the surface code or similar QEC architectures in mind for achieving logical qubits.
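
To make that connectivity requirement concrete, the following minimal sketch (illustrative Python, not Intel code or an actual device layout) enumerates the four nearest neighbors of each site in a rectangular qubit grid, i.e. the degree-4 coupling map that surface-code error correction assumes:

```python
# Minimal sketch (not Intel's layout): nearest-neighbor connectivity of a
# 2D qubit grid, the degree-4 coupling map that surface-code QEC assumes.

def grid_neighbors(rows, cols):
    """Map each (row, col) site to its up/down/left/right neighbors."""
    neighbors = {}
    for r in range(rows):
        for c in range(cols):
            adjacent = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            neighbors[(r, c)] = [(nr, nc) for nr, nc in adjacent
                                 if 0 <= nr < rows and 0 <= nc < cols]
    return neighbors

if __name__ == "__main__":
    conn = grid_neighbors(5, 5)
    # Interior qubits have exactly 4 neighbors (the connectivity described as
    # "key to error correction"); edge and corner qubits have fewer.
    print(conn[(2, 2)])  # [(1, 2), (3, 2), (2, 1), (2, 3)]
```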

Another pillar of fault tolerance is being able to measure and control thousands or millions of qubits reliably and rapidly. Intel has tackled this through its cryogenic control electronics and interconnect innovations. The Horse Ridge II control SoC (at 4 K) and the more recent Pando Tree chip (at ~10 mK) form a two-tier cryo-control system that dramatically reduces the wiring complexity in a large quantum setup. Horse Ridge II can generate control voltages and microwave pulses for groups of qubits and communicates through just a few lines down to Pando Tree. Pando Tree then acts as a demultiplexer at the qubit plane, distributing control signals to up to 64 qubits from a single input line. This approach means the number of cables between temperature stages grows logarithmically, not linearly, with qubit count. For example, controlling 1000 qubits might require only ~10 input lines from 4 K to mK (rather than 1000 separate coax lines), and a million-qubit system might need on the order of 20 lines. Such drastic cable count reduction is crucial for fault tolerance, because a fault-tolerant machine will require millions of physical qubits – impossible to wire up one-by-one. Intel’s cryo-engineering effectively “uncorks” this bottleneck, as closer integration of control electronics preserves signal fidelity and reduces heat load when scaling up. Additionally, by operating control chips at 4 K and 0.01 K, Intel opens the possibility of faster feedback for error correction cycles. The vision is to eventually have qubit drive, readout, and even some classical logic for decoding error syndromes all co-located in the cryogenic environment for speed. Intel has stated that “quantum controls [are] an essential piece of the puzzle…to develop a large-scale commercial quantum system”, underlining that solving control and readout challenges is as important as qubit fabrication for fault tolerance.
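
As a back-of-envelope illustration of those cable counts (my own arithmetic; the only Intel-published figure here is Pando Tree's 64-qubit fan-out), the sketch below compares direct wiring, a single 64:1 demultiplexing stage, and binary addressing, which is the assumption under which roughly 10 lines suffice for 1,000 qubits and roughly 20 for a million:

```python
import math

# Back-of-envelope comparison (my assumptions beyond the published 64-qubit
# Pando Tree fan-out): lines needed to address N qubits under (a) direct
# wiring, (b) a single 64:1 demux stage, (c) binary addressing, which is the
# scheme the ~10 lines / 1,000 qubits and ~20 lines / 1,000,000 qubits
# figures in the text correspond to.

def direct_lines(n_qubits):
    return n_qubits                        # one dedicated line per qubit

def demux_lines(n_qubits, fanout=64):
    return math.ceil(n_qubits / fanout)    # one shared line per 64-qubit group

def binary_address_lines(n_qubits):
    return math.ceil(math.log2(n_qubits))  # log2(N) address lines

for n in (1_000, 1_000_000):
    print(n, direct_lines(n), demux_lines(n), binary_address_lines(n))
# 1,000     -> 1,000 direct,    16 with one 64:1 stage,    10 with binary addressing
# 1,000,000 -> 1,000,000,    15,625,                       20
```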

Intel’s research teams also focus on quantum error correction protocols and have explored designs for shuttling or coupling spins to enable multi-qubit interactions across the chip. While no large-scale QEC demonstration has been published by Intel yet, they are investing in the groundwork: high-volume automated testing of qubits to gather error statistics, and software like Intel’s Qubit SDK which can simulate error-correcting algorithms on architectures similar to their hardware. The company has openly stated “we are investing in quantum error correction and controls” in tandem. One notable achievement is the development of a 300 mm wafer-scale cryogenic prober (in collaboration with Bluefors) that can cool full wafers to ~1 K and automatically test hundreds of qubit devices in hours. This allows Intel to statistically measure qubit uniformity, coherence and error rates across entire batches, feeding into process tweaks that improve reproducibility. Such engineering discipline, reminiscent of classical yield ramping, is rarely applied in quantum research and gives Intel a trove of data for understanding error sources. Indeed, Intel researchers reported in 2024 that single-electron spin qubits across a wafer showed 99.8-100% yield per quantum dot and 96% yield for full 12-dot devices, with high uniformity in threshold voltages. This consistency bodes well for error correction, since uniform qubits simplify calibration and ensure no “bad qubits” undermine a logical qubit.

In summary, Intel’s fault-tolerance focus is evident in: (1) their drive toward high-fidelity, long-coherence qubits (through materials and design), (2) an architecture tailored for QEC (dense 2D connectivity and planning for millions of qubits), and (3) innovations in cryo-control and testing to support massive scale. By improving “qubit density, reproducibility of uniform qubits, and measurement statistics from high-volume testing,” Intel is chipping away at the fundamental barriers to a fault-tolerant quantum computer. Each incremental advance – a more stable qubit, a faster feedback loop, a reduction in wiring – is bringing the error-corrected, large-scale quantum computer closer to reality, even if the finish line is still years out of reach.

CRQC Implications

A major motivation for fault-tolerant quantum computing is the ability to execute Shor’s algorithm and other attacks on classical cryptography – so-called cryptographically relevant quantum computing (CRQC). CRQC typically implies a machine capable of running, for example, factoring algorithms on RSA-2048 in a reasonable time, which is estimated to require on the order of thousands of logical qubits (and millions of physical qubits given overhead) with sustained low error rates. Intel’s long-term roadmap does anticipate quantum processors at this immense scale (million+ qubits), but the company’s strategy and public stance suggest CRQC is still a distant goal. Intel explicitly notes that today’s quantum systems are limited to “tens or hundreds” of qubits and “to achieve quantum practicality, commercial quantum systems need to scale to over a million qubits” while overcoming qubit fragility and programmability challenges. In other words, the kind of machine needed for breaking modern cryptography lies beyond the NISQ-era devices and even beyond the first few generations of error-corrected quantum computers. Intel is aligning its R&D with that eventual scale – focusing on technologies that could enable millions of qubits (such as extremely dense qubit integration and on-chip control as discussed) – but it has not issued any claim of a specific timeframe for CRQC capability.
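
To see where the "millions of physical qubits" figure comes from, here is an illustrative surface-code overhead estimate; the code distance and logical-qubit count are my assumptions for illustration, not an Intel or NIST projection:

```python
# Illustrative surface-code overhead (assumed numbers, not an Intel estimate):
# each logical qubit of distance d costs roughly 2 * d**2 physical qubits
# (d*d data qubits plus about as many measurement ancillas).

def physical_per_logical(distance):
    return 2 * distance ** 2

def total_physical(logical_qubits, distance):
    return logical_qubits * physical_per_logical(distance)

# Example: a few thousand logical qubits at code distance 27 (a commonly cited
# scale for RSA-2048 estimates) already lands in the millions of physical
# qubits, before counting routing and magic-state distillation overhead.
print(physical_per_logical(27))      # 1458
print(total_physical(4000, 27))      # 5,832,000
```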

In fact, Intel’s cautious approach may implicitly place CRQC on a longer timeline than some competitors project. IBM, for instance, has a public target of delivering a ~200 logical qubit fault-tolerant machine by 2029 (which could potentially threaten some cryptography). Intel, meanwhile, avoids promises about years, stressing the importance of not getting caught in hype that could lead to a “quantum winter” if expectations aren’t met.

The flip side is that Intel’s careful, quality-first roadmap may reach CRQC later than the more aggressive roadmaps of others. By focusing on solving foundational issues (coherence, interface bottlenecks, error rates) now, Intel is essentially trading short-term speed for long-term capability. In the near-to-mid term, this means Intel is unlikely to produce a quantum computer that threatens encryption before the late 2020s. As of 2025, Intel’s largest chip has 12 physical qubits (no demonstrated logical qubit yet), whereas some peers have devices with 50-100+ physical qubits (though none with a true logical qubit at scale either). The question is whether Intel’s approach will mature in time to contribute to CRQC before or around the same time others reach it. On the positive side, if and when CRQC-class hardware emerges from Intel’s program, it could be extremely robust and manufacturable at scale, potentially accelerating the proliferation of such powerful quantum machines (for better or worse in terms of cryptographic security).

Modality & Strengths/Trade-offs

Intel’s choice of silicon spin qubits as its primary modality comes with distinct strengths and trade-offs. The biggest strength is synergy with existing CMOS fabrication – these qubits are essentially nanoscopic transistor-like structures (quantum dots gated on silicon), meaning Intel can build them in its standard facilities with minor tweaks. This has two profound implications: scalability and uniformity. Because each spin qubit is so small (50 nm scale) and fabricated using lithography, Intel can in principle pack orders of magnitude more qubits into a chip than modalities like superconducting circuits or ion traps, which are much larger devices. For example, IBM’s 127-qubit superconducting chip (Eagle) is physically large (centimeters of area) and pushing wiring limits, whereas a silicon chip of the same area could contain thousands of quantum dots. Intel has emphasized that its spin qubits are “up to 1 million times smaller than other qubit types,” potentially enabling dense integration. Moreover, by using 300 mm wafers and advanced process control, Intel can achieve high yield and reproducibility of qubits across the entire wafer – as demonstrated by ~95% yields on Tunnel Falls devices and extremely consistent transistor-like characteristics across thousands of qubits. This uniformity is a major advantage because a large quantum computer will require nearly identical qubits for error correction to work efficiently. Competing approaches often fabricate qubits one chip at a time with e-beam lithography or hand assembly, which can lead to device variability. Intel’s industrial approach is uniquely suited to eventually fabricating millions of qubits with minimal variation.
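
A quick, idealized footprint calculation shows where the "million times smaller" figure comes from; the superconducting-qubit footprint used below is my assumption, and real layouts of either type need far more area for gates, wiring, and readout:

```python
# Idealized footprint arithmetic (the superconducting qubit size is an assumed
# round number; real devices of both types need extra area for gates, wiring,
# and sensors, so the final figure is an upper bound, not a practical count).

spin_qubit_side_m = 50e-9    # ~50 nm quantum dot, per the text
sc_qubit_side_m   = 50e-6    # assumed ~50 um superconducting qubit footprint

area_ratio = (sc_qubit_side_m / spin_qubit_side_m) ** 2
print(area_ratio)            # 1e6 -- the "up to a million times smaller" figure

die_area_m2 = (1e-2) ** 2    # a 1 cm^2 die
print(die_area_m2 / spin_qubit_side_m ** 2)   # ~4e10 dot sites fit geometrically
```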

Another strength of silicon spin qubits is their coherence properties. In bulk silicon with isotopic purification (eliminating most noisy nuclear spins), electron spins have demonstrated very long coherence times (in the ms to seconds range with echo techniques). Even in multi-dot devices on silicon/silicon-germanium, coherence times T_2 on the order of 10-100 ms have been reported – much longer than typical superconducting qubit T_2 times (which are in the 100 µs to few ms range). Longer coherence means fewer error correction cycles and a potentially lower overhead to maintain quantum information. Intel has leveraged isotopically enriched 28Si in its quantum well layers to tap into this advantage. Additionally, silicon spin qubits may offer the ability to operate at higher temperatures than superconducting qubits. Superconducting circuits generally require ~10-20 mK operation; by contrast, experiments (including Intel/QuTech’s) have shown silicon spin qubits can function at 1.1-1.2 K when engineered appropriately. Intel’s Horse Ridge controller is specified for 4 K, and the goal is to have the qubits and control electronics meet somewhere in between – “Intel aims to have cryogenic controls and silicon spin qubits operate at the same temperature level,” potentially a few kelvin. Even a 1 K qubit reduces the refrigeration complexity significantly (no dilution refrigerator needed, a simpler pumped 4He system might suffice). This “hot qubit” direction is a long-term benefit of silicon as a material platform. Intel’s Horse Ridge and Pando Tree chips also exploit the CMOS-compatibility of silicon – they are built with the same 22 nm FinFET process and can be integrated via advanced packaging. Intel has hinted it could eventually use technologies like Foveros 3D stacking to place control chips in close proximity or even bonded to the qubit chip. Such integration of classical and quantum chips is a strength of the silicon modality; the tight integration promises reduced latencies and improved stability (fewer physical connections that can introduce noise). In short, Intel’s modality choice plays to its core strengths – manufacturing scale, precision fabrication, and co-integration of electronics – potentially enabling a uniquely manufacturable quantum computer.
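
One way to read these coherence numbers is as an operations budget: roughly how many gate durations fit inside a coherence time. The gate times below are my assumptions for illustration, and the ratio ignores error accumulation, readout, and idling errors:

```python
# Idealized "gate durations per coherence time" comparison (assumed gate times;
# ignores error accumulation, readout, and idling errors).

def ops_per_t2(t2_seconds, gate_seconds):
    return t2_seconds / gate_seconds

# Silicon spin qubit: T2 ~ 10 ms (with echo), two-qubit gate ~ 100 ns (assumed)
print(ops_per_t2(10e-3, 100e-9))   # ~1e5 gate durations per T2

# Superconducting qubit: T2 ~ 100 us, two-qubit gate ~ 50 ns (assumed)
print(ops_per_t2(100e-6, 50e-9))   # ~2e3 gate durations per T2
```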

Of course, there are trade-offs and challenges with silicon spin qubits. One trade-off is that silicon qubits are at an earlier stage of development compared to superconducting qubits. While superconducting qubits have been used to demonstrate dozens of two-qubit gates and small algorithms, silicon spin qubits until recently were typically manipulated in 1-4 qubit systems in academic labs. Intel’s 12-qubit Tunnel Falls chip is among the largest of its kind; operating all 12 in a coherent algorithm remains a task being explored by research partners. Thus, in the short term, Intel’s approach lags in demonstrated quantum circuit complexity. Each silicon qubit also requires extremely fine-tuned electrostatic control (adjusting multiple gate voltages to isolate single electrons, etc.), which historically has been a painstaking manual process. Intel is mitigating this with automation (the cryoprober and software tuning routines), but tuning up even a 100-qubit silicon processor for the first time will be a heavy lift. By contrast, superconducting qubits come pre-defined by circuit design and are relatively easier to calibrate in larger numbers (though crosstalk and parameter spread are issues there too).

Another challenge is two-qubit gating in silicon: spin qubits interact via exchange or mediated coupling, which can be highly sensitive to charge noise and require nanosecond timing control. Achieving 99%+ two-qubit fidelities in silicon has only recently been shown in labs on a small scale, and doing so uniformly across many qubits will be difficult. Intel acknowledges this and is focusing on process improvements to reduce charge noise and on adding micromagnets or resonators to enable more robust coupling. Readout of spin qubits is also more complex – typically done via a charge sensor (like a single-electron transistor next to the qubit) that detects the spin state. This is a slower and more involved readout scheme than the microwave resonator readout used in superconducting qubits. Intel’s Horse Ridge II included the capability to read qubit states, likely by controlling and sensing such charge sensors, but scaling up thousands of readout channels (or time-multiplexing them) is an engineering challenge to be solved. In the big picture, superconducting qubits have a head start in multi-qubit integration and algorithm demonstration, whereas spin qubits promise longer-term scalability but still have several scientific and engineering hurdles to clear before they rival superconducting platforms in performance. Intel is essentially trading short-term progress for a potentially higher ceiling.

It’s also worth noting that Intel hasn’t completely abandoned other modalities – it maintains some effort in superconducting qubits (and has delivered test chips to partners), and its researchers keep an eye on quantum photonics and other approaches via academic collaborations. But silicon spin qubits remain the centerpiece, as they allow Intel to leverage its 50+ years of transistor know-how. As one Gartner analyst observed, “Intel’s silicon spin qubit approach is uniquely differentiated and leverages Intel’s prowess in classical semiconductors”, which could position Intel to mass-produce qubit arrays when the science matures.

The trade-off, in summary, is a slower initial pace in exchange for a potentially faster ramp when the approach comes to fruition. If Intel succeeds, the reward is huge: a quantum computer design that can piggyback on existing fab infrastructure – meaning faster scaling, lower per-qubit cost, and easier integration with classical systems (for control and perhaps cryo-compute acceleration alongside qubits). The drawback is the risk that the approach might face unforeseen roadblocks (for example, subtle materials issues or noise sources in silicon that limit fidelity, or simply that competitors achieve practical quantum computers with other tech before Intel does). Intel’s modality choice thus embodies a classic engineering trade-off: betting on a solution that is harder upfront but more scalable later. So far, the bet appears justified by measurable progress: dramatic improvements in spin-qubit uniformity, successful multi-qubit chips, and novel cryo-CMOS solutions – all indications that the CMOS-compatible qubit strategy is bearing fruit, even as significant challenges remain.

Finally, Intel’s fabrication strategy deserves special mention. By using its own leading-edge fabs, Intel can incorporate advanced techniques like deep UV patterning, chemical-mechanical polishing, and transistor replacement-gate processes into qubit manufacturing. The process refinements Intel reported (such as adding a screening gate layer and optimizing thermal budgets) have significantly reduced variability and improved quantum dot stability across wafers. This level of process control is a unique strength that most quantum startups and labs cannot match. Intel effectively treats the qubit arrays like any complex VLSI circuit, applying rigorous design rules, optical proximity correction, dummy fills, etc., to ensure uniformity. The trade-off is that making design changes or iterations can be slower and costlier (full wafer runs vs. quick one-off devices in a university fab). But Intel mitigates that by powerful simulation and co-design tools, and by leveraging its classical device modeling expertise to predict low-temperature behavior (as done for Horse Ridge’s transistor re-optimization at 4 K).

All told, Intel’s modality and fabrication strategy reflect its belief that manufacturability and integration are the keys to winning the quantum race. The company is effectively transplanting Moore’s Law disciplines into the quantum realm – an approach that might appear slower at first, but if successful, could enable a quantum leap in scaling when the technology matures.

Track Record

Intel’s track record in quantum hardware development spans roughly a decade and shows a pattern of close academic collaborations, iterative prototyping, and full-stack thinking. The 2015-2020 period established Intel as a serious player: its $50 million investment in QuTech (Delft) led to a string of joint achievements, including the 17-qubit and 49-qubit superconducting chips. Those chips were important not only as devices but as demonstration of Intel’s packaging and interconnect capabilities (e.g. flip-chip bonding to allow 10-100× more signals than wirebonds on the 17-qubit chip). Intel and QuTech also co-published advances in spin qubits, such as a 2019 result showing two-qubit logic gates in silicon quantum dots and operation above 1 K. During this time, Intel grew a dedicated quantum hardware team within Intel Labs, headed by Jim Clarke, and fostered talent exchange with partners – QuTech provided quantum physics expertise while Intel contributed engineering and fab know-how. This industry-academia synergy is a recurring theme: for instance, the design of Horse Ridge had input from QuTech researchers, and QuTech’s spin-qubit experiments benefited from Intel-fabricated devices.

By 2018, Intel’s quantum efforts expanded to include partnerships in the U.S. as well. It began collaborating with the U.S. Army Research Office and Laboratory for Physical Sciences (LPS). In 2023, Intel formalized this by partnering in LPS’s Qubit Collaboratory (LQC) program to “democratize” access to silicon spin qubits. Under this program, Intel provides Tunnel Falls chips to multiple research institutions (initial recipients include LPS itself, Sandia National Labs, University of Rochester, and University of Wisconsin-Madison). This is a notable aspect of Intel’s track record – rather than keeping its quantum chips purely in-house, it is seeding the R&D community with hardware. The goal is to accelerate learning on issues like multi-qubit calibration, novel gate schemes, and algorithms on silicon qubits. Feedback from these collaborations will likely inform Intel’s next designs (for example, if university teams find certain cross-talk or gating issues, Intel can address those in the next-gen chip). It also helps train students and scientists on Intel’s platform, building a workforce pipeline, as highlighted by partners at Sandia and UW-Madison who praised the reliability and sophistication of Tunnel Falls for training new quantum engineers. This collaborative, open-model approach is somewhat unique – companies like IBM and Google allow cloud access to their quantum machines, but Intel is literally handing chips to researchers to use in their own cryostats. It speaks to Intel’s confidence in the robustness of its devices and its emphasis on ecosystem building.

On the prototype front, Intel’s releases have been consistently focused on proving new technical capabilities. The superconducting chips (17 and 49 qubits) proved baseline functionality and packaging; Horse Ridge I (2019) proved cryo-control feasibility; Horse Ridge II (2020) added on-chip DACs and digital control for scaling qubit count. In 2022, while not a “chip” per se, Intel with partners built the full 300 mm cryogenic probe station, which is a critical enabler for high-throughput testing. The results from this (published in Nature in early 2024) were themselves a milestone: they showed that Intel’s fab process could produce qubits with 99.9% single-qubit fidelities uniformly and identified key process optimizations for yield. In mid-2024, Intel debuted the Pando Tree millikelvin chip at the IEEE VLSI Symposium, showing the world’s first mK-range quantum signal router that sits right next to qubits. This was presented as a significant step towards solving the wiring scale problem, demonstrating demuxing to 64 qubit channels at mK temperatures. Each of these prototypes – whether a qubit chip or a control chip – fits into Intel’s full-stack roadmap. For instance, alongside Tunnel Falls, Intel released a Quantum SDK (software development kit) for developers to simulate quantum algorithms for its architecture. The SDK can emulate a spin-qubit processor’s characteristics, so researchers can start coding and optimizing algorithms that might one day run on Intel’s quantum hardware. Intel is thus building out the software and architecture side in parallel (e.g., error correction decoders, compilers for spin qubits) – a track record indicating they do not see quantum computing as just a collection of qubits, but as an integrated system architecture.

Intel has also engaged in public demonstrations and education. In 2020, it launched an “Intel Quantum Computing Playground” at its Oregon campus, and its researchers frequently present at conferences (e.g., IEDM, APS, IEEE) sharing progress. Intel’s “Quantum Practicality” vision is often discussed by its scientists in media: they stress being realistic about timelines and focusing on the ingredients needed for a useful quantum computer, not just one-off lab feats. This candor is part of its track record too – for example, when announcing Tunnel Falls, Intel acknowledged it is “not a commercial offering” and that many fundamental challenges remain on the road to a fault-tolerant machine. This realistic tone has earned Intel a reputation for seriousness in the quantum community.

In terms of academic output, Intel-associated researchers have co-authored numerous high-impact papers: from the demonstration of two-qubit gates in silicon (Veldhorst et al. 2015, which helped kickstart the field), to the 2020 Nature paper on “hot” qubits, to the 2024 Nature paper on cryo-probing uniform qubits, and arXiv/ACS publications detailing Tunnel Falls’ design and performance. This academic collaboration record underscores that Intel’s program is scientifically rigorous and peer-reviewed, not a black box. They also partner with other industry players when beneficial – for instance, working with equipment makers (like Bluefors for the cryo-prober, or Afore for probe tech) to push the infrastructure forward. There’s also a notable partnership with HRL Laboratories and UW-Madison announced around 2022-2023 to collaborate on silicon qubit techniques, combining HRL’s expertise in quantum dot qubits (they hold some records in Si qubit coherence) with Intel’s manufacturing and UW’s physics knowledge. All these highlight Intel’s role as a hub connecting industry, academia, and government in the quantum space.

In summary, Intel’s track record is one of measured, tangible progress: each year or two brought a new quantum chip or capability, aimed at solving a known bottleneck. The company has consistently hit the milestones it set (e.g. delivering the 12-qubit chip by 2023, delivering Horse Ridge II in 2020, etc.) and used collaborations to amplify its efforts. While it hasn’t yet claimed dramatic quantum performance milestones (like quantum supremacy experiments or cloud services), Intel has built a strong foundation and credibility in the community. The combination of prototype hardware, supportive ecosystem moves (SDK, chip distribution), and scientific publications suggests that Intel’s quantum program is progressing in line with a long-term plan, accumulating the know-how needed for the endgame of a large-scale quantum computer.

Challenges

Intel faces a spectrum of engineering and scientific challenges on the road to a large-scale, fault-tolerant quantum computer. Many of these challenges are openly acknowledged by Intel’s leaders – they often note that while important milestones have been reached, “there are still fundamental questions and challenges” to solve before a commercial quantum system is realized.

One overarching challenge is scaling from prototype devices to thousands or millions of qubits without sacrificing performance. This is not simply a manufacturing challenge (though making millions of anything is non-trivial) – it’s also about architecture and error management. Intel will need to demonstrate that its qubit fidelity can be maintained or improved as qubit count grows. Right now, single-spin qubits have high fidelity, but two-qubit gate fidelity needs to meet error correction thresholds across the entire chip. Intel’s plan to implement surface-code error correction implies needing error rates on the order of ~1e-3 or better consistently. Achieving and maintaining such rates is challenging; it means further reducing charge noise, improving materials (e.g. even higher purity silicon, better dielectrics to reduce electric field noise), and extremely precise control calibration. While Intel’s 12-qubit Tunnel Falls chip is a significant step, performing a logical qubit encoding (which might require, say, a 3×3 patch of 9 data qubits plus ancillas in a surface code) is a key next challenge. Intel will have to integrate a two-dimensional array of qubits (rather than just a 1D line) and demonstrate multi-qubit entangling operations in that array. This will test whether the cross-coupling and interference in a dense qubit grid can be managed – something the company is actively looking at as it “plans to…fabricate 2D arrays with increased qubit count and connectivity” going forward.
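
A standard rule of thumb (a textbook approximation, not an Intel-specific model) shows why the ~1e-3 target matters: below threshold, the logical error rate of a distance-d surface code is suppressed roughly as (p/p_th) raised to the power (d+1)/2, so every factor of margin below threshold compounds quickly with distance:

```python
# Rule-of-thumb surface-code suppression (a textbook approximation, not an
# Intel-specific model): p_logical ~ A * (p / p_th) ** ((d + 1) / 2)

def logical_error_rate(p_phys, distance, p_threshold=1e-2, prefactor=0.1):
    return prefactor * (p_phys / p_threshold) ** ((distance + 1) // 2)

# At p = 1e-3 (10x below an assumed ~1% threshold), each +2 in code distance
# buys roughly another 10x of logical error suppression.
for d in (3, 5, 7, 11):
    print(d, logical_error_rate(1e-3, d))
# d=3 -> ~1e-3, d=5 -> ~1e-4, d=7 -> ~1e-5, d=11 -> ~1e-7 (approximate)
```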

Another major challenge is in the control and readout domain. Even with Horse Ridge and Pando Tree, scaling control electronics further will be tough. Pando Tree currently demuxes to 64 qubits – to handle thousands of qubits, a hierarchy of such chips or larger fan-out ratios will be needed, and the control signals must remain fast and precise. Clock distribution and synchronization at cryogenic temperatures, signal attenuation and filtering – all these require careful engineering so that qubits can be manipulated without cross-talk or overheating the fridge. Reading out qubits, especially if using analog sensors, could create a data bottleneck: millions of physical qubits might produce far more readout data per second than today’s systems, necessitating on-the-fly processing (perhaps via cryogenic FPGAs or ASICs). Intel might need to incorporate some cryogenic classical processors to handle error syndrome extraction and feed it to error correction algorithms in real-time. This concept of a “cryo-hardware stack” (qubits + cryo-controllers + cryo-compute) is on the cutting edge and certainly a challenge Intel is expected to face. The company’s expertise in classical processors could help here, but adapting conventional logic to millikelvin environments (or alternatively shuttling data to slightly warmer stages for processing) is non-trivial.
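
A rough throughput estimate illustrates the readout bottleneck; the cycle time and ancilla fraction below are my assumptions, purely for scale, not an Intel specification:

```python
# Rough syndrome-data throughput estimate (assumed cycle time and ancilla
# fraction; purely illustrative, not an Intel specification).

def syndrome_rate_bits_per_s(physical_qubits,
                             ancilla_fraction=0.5,
                             cycle_time_s=1e-6):
    measurements_per_cycle = physical_qubits * ancilla_fraction  # 1 bit each
    return measurements_per_cycle / cycle_time_s

# A million physical qubits measured every microsecond would produce ~0.5 Tbit/s
# of raw syndrome data that must be decoded in (near) real time.
print(syndrome_rate_bits_per_s(1_000_000))   # 5e11 bits per second
```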

Materials and fabrication present another set of challenges. As devices get more complex (e.g., multi-layer interconnects for 2D qubit arrays, integrated resonators or couplers), maintaining the current high yields and uniformity will be harder. Intel noted improvements by reducing thermal budget and adding a screening layer in its process; further device innovations may be needed, such as new gate stack materials to minimize charge noise or strategies to mitigate defects at the semiconductor interfaces. Every added component (e.g., a microwave resonator for long-distance coupling, or a spin-orbit coupling element) could introduce new loss or noise channels. Intel’s challenge is to integrate more functionality into the qubit chip while keeping decoherence under control. Coherence itself, even with isotopic purification, can be threatened by environmental factors. For instance, cosmic ray strikes have emerged as an issue for superconducting qubits, causing bursts of correlated errors. Silicon devices might also be vulnerable to radiation or background cosmic rays generating charge traps. It’s possible Intel will need to consider radiation shielding or error mitigation techniques for such burst noise events when operating large arrays – an area just beginning to gain attention in the community.

Another challenge, often highlighted by observers, is competition and the pacing of innovation. Intel’s methodical approach means it must avoid being left behind if competitors achieve a breakthrough. For example, if another company demonstrated a 100-qubit error-corrected quantum computer in the next few years, Intel would face pressure to accelerate its efforts. Thus far, Intel seems content that no one is too far ahead on the critical path to fault tolerance – everyone has significant hurdles remaining. But the company will need to remain agile: it may have to increase qubit counts faster at some point to test scaling behavior, even if fidelity isn’t yet ideal, simply to learn what problems arise at scale. Balancing the “quality vs quantity” trade-off is a dynamic challenge; Intel has leaned toward quality, but eventually quantity matters too for demonstrating small error-corrected loops or logical qubits. Intel’s own experts like Clarke and Uhlig are surely weighing when to ramp up qubit counts – and the next-gen chip in 2024/25 will be telling in this regard (whether it’s, say, 12 improved qubits, or a larger array with similar qubit performance).

Furthermore, talent and multidisciplinary coordination are challenges for any quantum project of this scope. Intel will continue needing specialized quantum engineers, cryogenic experts, and software architects who can bridge quantum and classical domains. By spreading chips to universities and involving more external researchers, Intel helps cultivate this talent, but it’s an ongoing need. On the organizational side, integrating the quantum effort with Intel’s mainstream manufacturing is likely challenging – running delicate R&D wafers through high-volume fabs requires careful scheduling and custom modifications, which can strain fab resources. Intel has a Technology Development group (where many quantum researchers sit) that interfaces with the production lines; keeping that synergy effective as complexity grows will test management.

Finally, Intel acknowledges the need to “overcome qubit fragility and software programmability” to make quantum practical. On the software side, a challenge will be developing compilers and libraries that map algorithms onto Intel’s unique hardware efficiently. The Intel Quantum SDK is a start, but as the hardware evolves (e.g., adding new gate capabilities or error correction procedures), the software must co-evolve. Ensuring a full stack – from algorithm to compiled pulses – works harmoniously is a non-trivial system challenge (for example, deciding how to schedule operations to avoid spectator errors, how to best use the limited wiring channels from Horse Ridge to many qubits, etc.). Intel’s All-Access Quantum talks and Labs Day sessions have emphasized co-design between hardware and software, but executing that co-design effectively is an ongoing difficulty.

In summary, Intel’s path is fraught with the standard challenges of quantum computing (decoherence, error rate improvement, scaling overhead) and some unique challenges of their approach (like automating the tuning of thousands of quantum dots, and integrating cryo-CMOS control at scale). The company’s measured progress so far suggests they are systematically addressing these one by one. Yet, as they readily admit, there is a “long way to go”. Each solution (e.g., Horse Ridge solving one layer of wiring) reveals the next problem (e.g., now the millikelvin wiring and on-chip fan-out need solving via Pando Tree). Reaching the holy grail of a fault-tolerant quantum computer will require continued innovation at every level.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.