(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)
Introduction
Google is a frontrunner in the quest to build practical quantum computers. The company made headlines in 2019 by achieving quantum supremacy – using its 53-qubit Sycamore processor to perform in about 200 seconds a task that was estimated to require 10,000 years on a top supercomputer. This dramatic demonstration marked a milestone in computing and signaled Google’s emergence as a leader in quantum hardware.
Since then, Google has set an ambitious target: to build a “useful, error-corrected quantum computer” by 2029. Announced by CEO Sundar Pichai in 2021, this goal defines Google’s roadmap for the decade. In essence, Google is striving to create a fault-tolerant quantum machine within ten years – a feat likely requiring on the order of one million physical qubits to yield enough stable logical qubits. Achieving this will demand major advances in qubit quality, error correction, and scaling technologies.
Google’s approach is characterized by bold leaps and deep research. Rather than publishing a detailed year-by-year timeline, the company communicates broad milestones and focuses on solving core technical challenges behind the scenes. Over the past few years, Google has steadily progressed on crucial fronts: improving qubit fidelities, demonstrating quantum error correction in larger qubit arrays, and increasing the qubit counts on its chips. Google has also invested heavily in infrastructure – for example, it opened a Quantum AI Campus in Santa Barbara with a dedicated quantum data center and in-house chip fabrication facilities to accelerate development.
Milestones & Roadmap
Google famously achieved quantum supremacy in 2019 with the 53-qubit Sycamore processor, performing a random circuit sampling task in 200 seconds that they estimated would take a classical supercomputer 10,000 years (IBM contested this, arguing a classical machine could do it in about 2.5 days, but it was nonetheless a clear quantum advantage demonstration). After that, Google set its sights on building a “useful, error-corrected quantum computer” by 2029. This goal, announced by CEO Sundar Pichai at Google I/O 2021, effectively put a stake in the ground for achieving a fault-tolerant machine within the decade. In concrete terms, Google’s team has indicated this likely means on the order of 1 million physical qubits to get enough logical qubits for useful tasks. Their approach to reach that is three-fold: improve physical qubit quality, implement quantum error correction, and scale up via larger and possibly modular chips.
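The "million physical qubits for useful logical qubits" arithmetic can be sanity-checked with the textbook surface-code overhead formula. A minimal sketch in Python, assuming a distance-d surface code patch uses 2d² − 1 physical qubits (d² data plus d² − 1 measure qubits), with d = 22 as an illustrative choice that lands near the often-quoted "~1,000 physical per logical" figure:

```python
# Back-of-envelope surface-code overhead. The distance d = 22 and the
# 1,000,000-qubit budget are illustrative assumptions, not Google specs.
def surface_code_qubits(d: int) -> int:
    """Physical qubits in one distance-d surface code patch:
    d*d data qubits plus d*d - 1 measure qubits."""
    return 2 * d * d - 1

physical_budget = 1_000_000
d = 22
per_logical = surface_code_qubits(d)            # 967 physical per logical
logical_qubits = physical_budget // per_logical  # ~1,034 logical qubits
print(per_logical, logical_qubits)
```

With these assumptions, a million-qubit machine yields on the order of a thousand logical qubits, which is the scale the rest of this profile refers to.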
By 2022, Google had demonstrated a primitive logical qubit using the surface code – specifically, they showed that increasing the code distance (adding more physical qubits to the logical qubit) reduced the error rate, a key hallmark of a functioning error correction system. This was referred to as achieving “Milestone 2” (implicitly, Milestone 1 was achieving an error rate below the threshold on individual qubits/gates, which they had done). In early 2023, Google published results in Nature showing that on a 72-qubit device, a distance-5 surface code logical qubit (49 physical qubits) achieved a slightly lower error rate than a distance-3 one (17 physical qubits) – the first experimental sign that quantum error correction improves as qubit numbers grow.
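The error-suppression behavior described above can be illustrated with a toy model. In the standard surface-code picture, once physical error rates are below threshold, the logical error rate falls exponentially with code distance: each step of +2 in distance divides it by a factor Λ ≈ p_th/p. The constants below (physical error rate, threshold, prefactor) are illustrative assumptions, not Google's measured values:

```python
# Toy model of surface-code error suppression. Heuristic scaling:
#   eps_L ~ 0.1 * (p / p_th) ** ((d + 1) // 2)
# so growing d by 2 divides the logical error rate by Lambda ~ p_th / p.
def logical_error_rate(p_physical: float, p_threshold: float, d: int) -> float:
    return 0.1 * (p_physical / p_threshold) ** ((d + 1) // 2)

# With p at half the threshold, each distance step halves the error rate.
for d in (3, 5, 7):
    print(d, logical_error_rate(0.005, 0.01, d))
```

This is exactly the trend the 17-qubit vs. 49-qubit comparison was designed to detect: if the device were above threshold, the larger code would have performed worse, not better.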
In late 2024, Google made a splash by unveiling “Willow”, a new superconducting chip with 105 qubits that set a fresh quantum performance record. Willow performed a random circuit sampling (RCS) benchmark in under 5 minutes – a computation they estimated would take the largest classical supercomputer 10²⁵ years (ten septillion years) to simulate. In other words, Google extended the quantum supremacy frontier by using roughly double the qubits and more circuit layers than the 2019 experiment, widening the gap with classical. This achievement, announced in Nov 2024 and accompanied by a Google blog and a paper, shows that Google is not just resting on its 2019 laurels – they are rapidly improving qubit count and quality in tandem. Willow also incorporated advances in error mitigation and chip design that likely came from the past few years of research (for example, Google has been exploring 3D packaging to reduce crosstalk, and new materials to extend coherence).
Looking forward, Google’s roadmap implies scaling from 100 qubits to 1,000,000 qubits by 2029. Of course, they won’t do that in one big jump – we can expect intermediate steps: perhaps a 1,000+ qubit chip by the mid-2020s, then 10,000+, possibly by tiling multiple chips together. In fact, Hartmut Neven (who leads Google Quantum AI) and colleagues have discussed modular approaches like fabricating multiple layers of qubits and using flip-chip bonding or integrated photonics for communication. Google’s team mentioned the concept of a “quantum transistor”, meaning 1 logical qubit built out of 1,000 physical qubits as a unit building block. Achieving that would then allow scaling to millions of physical qubits by replicating that block. Essentially, Google envisions a vast 2D array of error-corrected logical qubits – possibly something like a patchwork of many 1,000-qubit modules each acting as one logical qubit. This vision is backed by Google’s heavy investment in in-house fabrication: they opened a Quantum AI Campus in Santa Barbara with their own cleanroom fabs and cryo test facilities, spending billions to develop the technology.
Focus on Fault Tolerance
Google’s explicit goal is an error-corrected quantum computer by 2029. In practical terms, that means they want to demonstrate a complete quantum system where logical qubits can run deep algorithms without crashing from errors. They are not as public about exact logical qubit counts for 2029, but given the million-physical-qubit ambition, they might be targeting on the order of ~1,000 logical qubits (assuming ~1,000 physical qubits per logical qubit for the surface code). That would be enough logical qubits to do very impactful things, including breaking some cryptography – we’ll discuss that below. In the nearer term, Google is ticking off QEC milestones: after demonstrating in 2023 that a larger code suppresses logical errors, the next milestones likely include demonstrating operations between two logical qubits (entangling logical qubits, logical gate fidelity, etc.), and eventually a small network of logical qubits performing an algorithm. They haven’t publicly stated these, but one can infer them. Notably, in 2025 Google published a preprint by Craig Gidney (one of their researchers) that drastically reduced the resource estimates for breaking RSA-2048: using some novel techniques, fewer than 1 million physical qubits might suffice to factor 2048-bit RSA in under a week. This was a theoretical breakthrough (I’ll cite it more later), but it aligns with Google’s hardware trajectory – if they have ~10⁶ qubits by 2029 and algorithms continue to improve, Google might have a machine capable of CRQC around that time or shortly after.
CRQC Implications
Google’s roadmap arguably has the most direct CRQC implications simply because of the scale. A million physical qubits with high fidelity could produce on the order of a few hundred to a thousand logical qubits (depending on error rates and code efficiency). Gidney’s 2025 result is the key reference point here: the 2019 estimate called for around 20 million physical qubits to factor RSA-2048 in 8 hours, but new optimizations bring that down to under 1 million qubits if you’re willing to run for longer (about a week). In terms of logical qubits, that scheme uses on the order of ~1,400. Thus, if Google indeed realizes ~1,000 logical qubits by 2029, they’re in the ballpark of the capability needed to threaten 2048-bit RSA. Even if those logical qubits initially have limited gate depth or speed, it’s conceivable that within a couple of years after 2029, improvements could enable a full factoring run. Sundar Pichai’s comment that quantum computing’s real-world impact is 5-10 years away (as of 2021) underscores that they see the latter 2020s as the transition from pure research to practical utility. Security professionals are certainly watching Google’s progress – the joke in the community is that Google’s 2029 goal is the de-facto countdown for when RSA-2048 might become factorable. In any case, Google’s roadmap strongly validates the urgency of deploying post-quantum cryptography now, not later, because they are aiming right at the scales needed for CRQC.
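One way to see how striking the 2025 revision is: compare the two estimates' "spacetime" cost. The qubit counts and runtimes below are the headline figures from the papers discussed above; the qubit-hours product is my own rough comparison metric, not something either paper reports:

```python
# Rough comparison of published RSA-2048 factoring resource estimates.
# Qubit-hours is an illustrative spacetime-volume proxy, nothing more.
estimates = {
    "Gidney-Ekera 2019": {"physical_qubits": 20_000_000, "hours": 8},
    "Gidney 2025":       {"physical_qubits": 1_000_000,  "hours": 7 * 24},
}
for name, e in estimates.items():
    volume = e["physical_qubits"] * e["hours"]
    print(f"{name}: {e['physical_qubits']:,} qubits x {e['hours']} h "
          f"= {volume:,} qubit-hours")
```

Interestingly, under this crude metric the two schemes cost a similar total budget (160 million vs. 168 million qubit-hours); the 2025 result mainly trades a 20x smaller machine for a roughly 20x longer runtime, which is exactly the trade that matters for a hardware roadmap capped near a million qubits.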
Modality & Strengths/Trade-offs
Like IBM, Google uses superconducting transmon qubits (planar transmons in a 2D grid). Strengths of this approach include fast gate speeds, relatively straightforward nanofabrication, and the ability to leverage microwave control technology (which is well-developed from radio/RF engineering). Google has been a leader in improving two-qubit gate fidelity (their Sycamore and subsequent devices achieved ~99.7% fidelity two-qubit gates in the supremacy experiments, and they continue to refine their electronics and calibrations).
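To see why two-qubit gate fidelity is the metric everyone chases: in a random circuit, errors compound multiplicatively, so the overall circuit fidelity is roughly the product of all the gate fidelities. A minimal sketch with illustrative numbers in the ballpark of the 2019 experiment (the gate count of ~430 and the 99.4% fidelity are assumptions for the estimate, ignoring single-qubit and readout errors):

```python
# Errors compound: circuit fidelity ~ (per-gate fidelity) ** (gate count).
def circuit_fidelity(two_qubit_fidelity: float, n_two_qubit_gates: int) -> float:
    return two_qubit_fidelity ** n_two_qubit_gates

# Even excellent gates decay fast over hundreds of gates: ~0.075 here.
print(circuit_fidelity(0.994, 430))
```

This compounding is why a shift from, say, 99.4% to 99.7% fidelity is not an incremental tweak: it roughly halves the per-gate error and thus dramatically raises the depth a circuit can reach before its output signal vanishes.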
Another strength is Google’s strong theoretical and software team – they co-designed many of the quantum algorithms and error correction strategies they use, such as their surface code implementation and lattice surgery techniques. This vertical integration (from hardware to algorithms) allows them to optimize the entire system.
The trade-offs are similar to IBM’s: superconducting qubits require pristine fabrication and extreme cooling, and as qubit counts increase, control wiring and crosstalk become issues. Google has tackled some of this by developing multi-layer chips: for instance, they’ve experimented with qubits on one layer and readout resonators on another to reduce interference. There are also indications Google may explore 3D integration where qubits are distributed across stacked chips – which could help reach the million count (a single chip with a million qubits on one plane is unrealistic; more likely they’d take a modular approach with perhaps 1,000-qubit tiles). Another trade-off is that Google’s approach to scaling is quite hardware-intensive – building a million-qubit machine means formidably complex engineering (though their partnerships with NASA and national labs, and the deep pockets of Alphabet, help). It’s worth noting that Google’s modality choice is not locked to transmons alone; they have some research in flux qubits and other superconducting variants, but transmons remain the workhorse. The surface code they use demands a 2D grid with nearest-neighbor coupling, which transmons realize naturally.
Track Record
Google’s track record is strong in terms of research breakthroughs. They were first (with partners) to demonstrate quantum supremacy in 2019. They have published key papers on quantum error correction (in 2021 and 2023) and on quantum chemistry, etc. Unlike IBM, Google has not made their devices broadly available to the public or cloud – they focus on internal milestones and collaborations. This means fewer public performance metrics (like quantum volume) from Google, but the flipside is whenever Google announces something, it’s usually peer-reviewed and significant. For instance, the Willow chip result in 2024 was announced via a research paper and independently discussed by experts. Google also invested in infrastructure: their Santa Barbara Quantum AI campus has in-house fabrication, which came online around 2020-2021. By 2022 they reportedly had production of new qubits in that fab and were able to experiment rapidly with new designs.
One interesting fact: Google in 2023 demonstrated “quantum advantage in error correction” – meaning as they increased qubits in their error-correcting code from 17 to 49, the logical qubit error went down. This was a key proof-of-concept that their approach to scaling will eventually pay off, and it came roughly on schedule with what their roadmap (outlined in 2020) predicted. Also, Google has shown a willingness to trade speed for fidelity when needed (for example, they introduced mid-circuit measurement in some experiments to do teleportation and other tricks, even though that can slow things down, because it improves reliability). Overall, Google’s track record is one of scientific firsts more than delivering productized systems, which is expected given they are research-driven.
Challenges
Google’s challenges on the road to a million qubits are monumental. They will need to solve scaling issues that no one has solved yet, such as: how do you input/output signals to a million qubits in a fridge (even with multiplexing, the number of control lines and amount of heat load is daunting)? How do you fabricate chips with that many qubits without yield issues or variability? It might require error-tolerant design at the hardware level (allowing some bad qubits on a chip and routing around them, akin to bad sectors on a hard drive). Google is likely exploring ways to have qubits communicate across chips – perhaps using microwave-to-optical converters and fiber to link cryostats, or using superconducting communication resonators between dies. These are all active research areas.
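The wiring problem mentioned above is easy to quantify with naive arithmetic. The lines-per-qubit and multiplexing factors below are assumptions for the sake of illustration, not Google's published figures:

```python
# Illustrative I/O arithmetic for a million-qubit cryostat.
# Assumed numbers: ~3 control lines per qubit (XY drive, Z flux bias,
# a share of a readout line) and a hoped-for 100x multiplexing factor.
n_qubits = 1_000_000
lines_per_qubit = 3
naive_lines = n_qubits * lines_per_qubit   # 3,000,000 coax lines, naively
mux_factor = 100
muxed_lines = naive_lines // mux_factor    # still 30,000 lines into the fridge
print(f"{naive_lines:,} lines naively; {muxed_lines:,} with {mux_factor}x mux")
```

Even an optimistic 100x multiplexing factor leaves tens of thousands of lines entering the cryostat, which is why cryo-CMOS controllers and on-chip multiplexing are such active research areas.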
Another challenge is on the control and software side: managing a million-qubit control system and scheduling error correction cycles is akin to an operating-system problem that has never been attempted. Google will need to co-design a lot of classical control hardware (they might use FPGAs or custom ASICs to do fast feedback for error correction, similar to IBM’s approach). On the human side, Google has to maintain a large interdisciplinary team; competition for quantum talent is fierce. They’ve lost a few high-profile researchers over time to startups or academia, but so far remain a top destination for quantum engineers.
One more external challenge: Quantum skepticism and hype management – as roadmaps promise big things, there’s pressure. Google got some flak in 2019 from competitors (like IBM) about the term “supremacy,” and more recently some press have become skeptical of any claims. Google will need to keep providing hard evidence, as they’ve done, to cut through hype and show real progress. Given their track record, they likely will.