Rigetti

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

Rigetti Computing is a full-stack quantum hardware company specializing in superconducting qubit processors. Rigetti’s qubits are implemented as Josephson-junction-based circuits (transmons) operated at millikelvin temperatures inside dilution refrigerators. This platform offers extremely fast gate speeds (on the order of tens of nanoseconds) and leverages mature semiconductor-fabrication techniques for scalability. Rigetti’s strategy centers on scaling up superconducting qubit counts while improving fidelity and pursuing quantum error correction toward fault-tolerant systems.

Founded in 2013, the company was an early pioneer in cloud-accessible quantum computing – deploying its first devices online in 2017 – and it built an in-house fab (“Fab-1”) to manufacture its chips. Today, Rigetti is pushing an ambitious hardware roadmap combining multi-chip module integration, new fabrication methods, and international partnerships to achieve quantum processors capable of complex, real-world applications.

Milestones & Roadmap

Rigetti’s recent hardware milestones reflect a pivot toward modular scaling and higher fidelities. In late 2023, Rigetti introduced “Ankaa-2,” an 84-qubit superconducting chip made available on AWS Braket – at the time the highest qubit-count gate-model device on that cloud. At the end of 2024, Rigetti deployed the upgraded Ankaa-3 system, which met its 99% two-qubit gate fidelity target by roughly halving error rates from 2023 levels. Entering 2025, the company launched Cepheus-1-36Q, a 36-qubit processor built from four interconnected 9-qubit “chiplets.” This modular QPU achieved a median two-qubit fidelity of 99.5%, marking a twofold error reduction over Ankaa-3 just six months prior. The multi-chip design (4×9 qubits) is a deliberate shift from monolithic die scaling – it improves manufacturing yield and uniformity by linking smaller, high-yield chips into a larger processor. Rigetti demonstrated coherent entanglement across chip boundaries with no performance loss, validating this tiled approach. Subodh Kulkarni (CEO) noted that after years of refining the 80+ qubit single-chip devices, the company is now “manufacturing 9-qubit chips at 99.4% two-qubit fidelity” and has successfully “tiled 9-qubit chips without deterioration in performance,” setting the stage for more complex multi-chip architectures.

According to Rigetti’s roadmap, the mid-2025 36Q system is only a stepping stone. By late 2025, Rigetti plans to debut a 100+ qubit system (assembled from an expanded array of chiplets) while maintaining ~99.5% two-qubit fidelities. Beyond this, the company has disclosed plans for Lyra, a 336-qubit processor that will likewise leverage the multi-chip “chiplet” architecture. Achieving 300+ physical qubits is expected to require further tiling (e.g. combining 84-qubit units) and improved inter-chip interconnects. Rigetti’s new fabrication process called Alternating-Bias Assisted Annealing (ABAA) is a key enabler on the roadmap. Introduced in August 2024, ABAA applies targeted low-voltage bias to fine-tune each qubit’s frequency post-fabrication. This avoids more cumbersome trimming methods (like laser cuts) and yields qubits with more precise frequency targeting. The result is fewer frequency collisions and defects across large arrays, which directly supports higher two-qubit gate fidelities and scaling consistency. Rigetti is applying ABAA to its current 9-qubit and upcoming larger chips to ensure frequency uniformity as qubit counts grow.

Collaborative milestones also feature in Rigetti’s roadmap execution. Internationally, Rigetti has started deploying systems for research partners: for example, a 9-qubit Novera QPU (Rigetti’s compact, fourth-generation testbed) was installed at the Israeli Quantum Computing Center in 2024, integrated with Quantum Machines control electronics and NVIDIA classical hardware for hybrid experiments. In the UK, Rigetti supplied a 24-qubit system to the new National Quantum Computing Centre (NQCC) at Harwell, which went live in late 2024 to support British researchers. (This system is slated for an upgrade to 36 qubits under a UK government grant, as discussed later.) Rigetti also launched a Novera™ Partner Program in 2024 to cultivate an ecosystem for on-premises quantum deployments. Through this program, Rigetti sold its first academic-hosted QPU (a Novera unit) to Montana State University in Q4 2024, and partnered with industrial suppliers like Oxford Instruments (for cryogenics) to streamline delivery of turnkey systems. These milestones indicate that, while scaling up qubit count and fidelity is the technical focus, Rigetti is simultaneously working on system integration and distribution – shipping smaller QPUs to external labs and aligning with partners – to establish a broader user base ahead of its larger-scale processors.

Focus on Fault Tolerance

Achieving fault-tolerant quantum computing is a central goal driving Rigetti’s recent R&D efforts. In the NISQ regime, Rigetti’s one- and two-qubit gate fidelities (the latter now ~99.5% median) are approaching the thresholds required for effective quantum error correction. The company has prioritized demonstrating real-time QEC on its hardware and refining the classical control stack to support error-corrected operations. Notably, Rigetti collaborated with UK-based QEC specialist Riverlane to integrate a low-latency error decoder into Rigetti’s control system. In 2024 this team published a paper showing real-time quantum error correction on an 84-qubit Ankaa-2 processor, using Riverlane’s decoder to detect and correct errors on the fly. This experiment – one of the first of its kind on a superconducting platform – proved that feedback and decoding can keep pace with the rapid gate cycles of Rigetti’s superconducting qubits. It’s an encouraging step toward fault tolerance, as it validated that integrating a QEC loop (measurement → decode → correction) is feasible within the coherence time constraints of a mid-sized superconducting chip.
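To make that loop concrete, here is a minimal classical sketch of the measure → decode → correct cycle using a three-qubit bit-flip repetition code. This is the author’s illustration of the control-flow pattern only – not Rigetti’s or Riverlane’s actual decoder – and on real hardware the syndrome bits come from ancilla-qubit measurements and the whole cycle must finish within roughly a microsecond.

```python
import random

def qec_cycle(data, p_flip=0.01):
    """One measure -> decode -> correct cycle for a 3-qubit bit-flip
    repetition code (a classical stand-in for the hardware loop)."""
    # Noise: each data bit flips independently with probability p_flip.
    data = [bit ^ (random.random() < p_flip) for bit in data]
    # Syndrome extraction: parity checks on neighboring pairs.
    syndrome = (data[0] ^ data[1], data[1] ^ data[2])
    # Decode: a lookup table maps each syndrome to the most likely flip.
    flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
    # Correct: the feed-forward step that must beat the decoherence clock.
    if flipped is not None:
        data[flipped] ^= 1
    return data

state = [0, 0, 0]                 # logical |0> encoded as 000
for _ in range(1000):             # repeated cycles keep the state alive
    state = qec_cycle(state)
print("majority-decoded logical bit:", int(sum(state) >= 2))
```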

Rigetti is now expanding on these QEC capabilities through structured programs. In April 2025, Innovate UK awarded Rigetti UK a £3.5 million grant to lead a consortium on benchmarking and enhancing quantum error correction on superconducting hardware. Under this project, Rigetti will upgrade the NQCC testbed to a 36-qubit QPU (replacing the older 24-qubit system) and deploy a new high-speed control system tightly integrated with Riverlane’s QEC software stack. The explicit aim is to tackle “critical challenges” for fault tolerance: namely, classical processing bottlenecks in decoding and feedback, and the still-high physical error rates that necessitate efficient codes. By improving throughput, latency, and decoder accuracy on a larger chip, the team expects to push the state-of-the-art in real-time QEC metrics (e.g. increasing the number of error-corrected cycles achievable within coherence limits). Rigetti is also exploring next-generation error correcting codes suitable for its architecture. Through its selection in DARPA’s Quantum Benchmarking Initiative (QBI) in 2025, Rigetti is proposing to realize a “utility-scale” fault-tolerant quantum computer by 2033 using a combination of multi-chip modules and efficient QEC codes. In particular, Rigetti and Riverlane (a partner on the DARPA project) are focusing on quantum LDPC codes – a family of sparse, high-rate error-correcting codes that promise lower overhead than the surface code. The plan involves co-designing hardware and firmware to natively support qLDPC code execution on Rigetti’s superconducting qubits, thereby reducing the number of physical qubits per logical qubit and speeding up error syndrome processing. If successful, this approach could yield a fault-tolerant architecture more qubit-efficient than the surface-code schemes pursued by some competitors.
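The qubit-efficiency argument for qLDPC codes is easy to illustrate with rough numbers. The sketch below compares the surface code’s roughly 2d² physical qubits per logical qubit against one published qLDPC example from the recent literature – a [[144,12,12]] bivariate bicycle code encoding 12 logical qubits in 144 data qubits plus a comparable number of check qubits. These are illustrative figures; neither code is confirmed as Rigetti’s eventual choice.

```python
def surface_code_cost(k_logical, d):
    """Physical-qubit cost of k logical qubits on rotated surface code
    patches of distance d (~2*d^2 - 1 data + ancilla qubits per patch)."""
    return k_logical * (2 * d * d - 1)

# Published qLDPC example: a [[144, 12, 12]] bivariate bicycle code
# (12 logical qubits, 144 data qubits, ~144 check qubits).
k, d = 12, 12
qldpc_total = 144 + 144
print("surface code, 12 logical @ d=12:", surface_code_cost(k, d))  # -> 3444
print("qLDPC [[144,12,12]] total:", qldpc_total)                    # -> 288
print(f"overhead reduction: ~{surface_code_cost(k, d) / qldpc_total:.0f}x")
```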

Rigetti’s roadmap for fault tolerance is thus coming into focus: continue to raise physical qubit fidelities into the 99.9% range, incorporate fast-feedback control systems with integrated decoders, and implement a scalable QEC code (potentially qLDPC) in hardware. The near-term deliverables – such as the 36-qubit upgraded QEC testbed and the 100+ qubit multi-chip system – are geared toward building and validating logical qubits. Indeed, Kulkarni has stated that Rigetti’s ultimate goal is to “increase qubit count and significantly decrease error rates” in tandem, as both are required for quantum advantage and fault tolerance. Rigetti anticipates that by steadily halving two-qubit error rates every ~6-12 months (a pace it demonstrated from Ankaa-3 to Cepheus-1), and scaling chiplet count (from 4 to 12+ chiplets per system), it can create a path to the first logical qubits and small error-corrected circuits within the next few years. The company’s participation in government QEC programs (UK and DARPA) underscores a strategic alignment with national goals for fault-tolerant prototypes by the early 2030s. Achieving dozens of logical qubits with manageable overhead by that timeframe would position Rigetti as a serious contender in the race for commercially relevant quantum computing.
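As a back-of-envelope check on that cadence (the author’s arithmetic, not Rigetti guidance), halving the error rate every nine months – the midpoint of the 6-12 month range – takes the mid-2025 figure of 99.5% into 99.9% territory within about two to three years:

```python
# Fidelity projection under an assumed 9-month error-halving cadence.
error, year = 0.005, 2025.5      # 99.5% two-qubit fidelity, mid-2025
while error > 0.001:             # target: ~99.9% fidelity
    error /= 2
    year += 0.75
    print(f"{year:.2f}: error ~{error:.3%}  (fidelity ~{1 - error:.3%})")
```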

CRQC Implications

The prospect of cryptographically relevant quantum computing (CRQC) – typically defined as a quantum computer capable of breaking modern encryption like RSA-2048 via Shor’s algorithm – remains a long-term benchmark against which Rigetti’s progress can be measured. While Rigetti’s current devices are far from posing a threat to RSA, the company’s scaling trajectory and focus on error correction are directly relevant to reaching CRQC capabilities in the future. Breaking a 2048-bit RSA key is estimated to require on the order of thousands of logical qubits and billions of quantum gate operations, which translates to millions of physical qubits at today’s error rates.
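The arithmetic behind that claim can be sketched in the spirit of the widely cited Gidney-Ekerå analysis (which put RSA-2048 at roughly 20 million noisy qubits running for 8 hours). Every constant below is a round-number assumption, not a precise reproduction of any one paper:

```python
# Back-of-envelope physical-qubit count for factoring RSA-2048.
logical_qubits = 6_000   # order-of-magnitude logical register for Shor
p_phys = 1e-3            # assumed physical two-qubit error rate
p_thresh = 1e-2          # approximate surface code threshold
p_budget = 1e-12         # tolerable error per logical operation

# Surface code scaling: p_logical ~ 0.1 * (p_phys / p_thresh)^((d+1)/2);
# find the smallest odd code distance d that meets the budget.
d = 3
while 0.1 * (p_phys / p_thresh) ** ((d + 1) / 2) > p_budget:
    d += 2
physical = logical_qubits * (2 * d * d - 1)
print(f"distance d = {d}; ~{physical:,} physical qubits")  # d=21, ~5.3 million
```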

However, trends in both algorithmic improvements and hardware suggest the gap could close over the next decade. Optimized quantum factoring algorithms and better error-correcting codes might reduce the logical qubit requirement (recent work has already cut estimates substantially from earlier projections). On the hardware side, Rigetti’s emphasis on modular scaling and high fidelities is laying groundwork for eventually reaching the error-corrected regime that CRQC demands. The company’s roadmap, if sustained, shows a path to a few hundred qubits with ~99.9% fidelities in the latter 2020s, at which point small logical qubit demonstrations (e.g. a distance-3 or 5 surface code) could be feasible. From there, a logical-qubit count doubling every year or two (comparable to “quantum Moore’s Law” scenarios) would be required to reach the thousands of logical qubits for CRQC by the mid-2030s. Notably, the U.S. government’s guidance (via NSA and NIST) anticipates that a CRQC capable of cracking public-key cryptography “is likely by around 2035”, and Rigetti’s involvement in programs like DARPA QBI (targeting utility-scale quantum computing by 2033) aligns with that horizon. In practice, Rigetti’s machines will not threaten RSA-2048 in the near term – they simply won’t have enough qubits or low enough error rates. Even a 1,000-qubit noisy device can only factor numbers vastly smaller than RSA-2048. But if Rigetti can achieve a fully error-corrected machine with ~100 logical qubits (which might correspond to tens of thousands of physical qubits with LDPC codes) by, say, 2030, it would mark a significant inflection point. Scaling from 100 to 1000 logical qubits could then put RSA-2048 within striking distance.
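That timeline is easy to sanity-check under the doubling assumption (every constant here is hypothetical, chosen only to mirror the scenario above):

```python
# Hypothetical "quantum Moore's law" projection (illustrative constants).
logical, year = 100, 2030   # ~100 logical qubits assumed around 2030
while logical < 1000:       # low thousands needed for CRQC-class machines
    logical *= 2
    year += 1               # doubling annually (the optimistic end)
print(f"~{logical} logical qubits around {year}")  # -> ~1600 around 2034
```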

In summary, Rigetti’s own scaling projections indicate that breaking strong cryptography is far beyond its 2020s roadmap, but the building blocks for CRQC – high-fidelity operations, modular scaling to thousands of qubits, and fast feed-forward control for QEC – are exactly the areas Rigetti is targeting. The firm’s confidence in superconducting qubits as a “clear path to scaling” suggests it is positioning itself to be one of the hardware platforms that could achieve a CRQC when the time comes. Barring unforeseen breakthroughs, RSA-2048 remains safe for now; a cryptographically relevant Rigetti quantum computer is likely 10+ years away, dependent on iterative advances in qubit count (into the thousands), dramatic error rate reductions, and successful implementation of fault tolerance at scale. Until then, Rigetti’s progress will contribute to the incremental steps – e.g. demonstrating a single logical qubit, then a handful of logical qubits – that collectively mark the journey toward a CRQC-capable machine.

Modality & Strengths/Trade-offs

Rigetti has steadfastly focused on superconducting qubits as its technology modality, and this choice comes with distinct advantages and trade-offs vis-à-vis other quantum hardware approaches (ion traps, neutral atoms, photonics, etc.). The superconducting platform’s primary strengths are speed and scalability. Superconducting qubits perform gate operations many orders of magnitude faster than most alternatives – Rigetti’s two-qubit gates last about 60-80 ns, whereas a typical two-qubit gate on a trapped ion might take tens of microseconds to milliseconds. This ~1,000× speed advantage means superconducting processors can execute deeper circuits within the qubits’ coherence time. (State-of-the-art transmon qubits have coherence times on the order of 100 µs, so performing gates in <0.1 µs leaves more “headroom” for complex algorithms before decoherence, compared to slower modalities.) Superconducting qubits also benefit from well-established fabrication techniques – essentially leveraging silicon microfabrication and integrated circuit methods. Rigetti points out that its qubits are made with “well-established semiconductor design and manufacturing” and can take advantage of the CMOS industry’s decades of experience in scaling chips. In principle, superconducting qubit chips can be scaled to 2D arrays of hundreds or thousands of qubits by leveraging advanced packaging (e.g. multi-layer wiring, flip-chip bonding, and through-silicon vias) similar to classical processors. This is much like how classical multi-core CPUs are scaled, and indeed Rigetti’s multi-chip approach is an extension of that philosophy to avoid the yield issues of one huge chip. Another benefit is that superconducting qubits can interact via direct electric/magnetic coupling on-chip, enabling deterministic two-qubit gates. Unlike photonic qubits which lack natural two-qubit interactions, superconducting circuits use capacitive or tunable couplers to enact entangling gates on demand. Rigetti’s design, for instance, uses tunable coupler circuits in a square lattice topology for fast, configurable coupling between neighboring qubits.
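Two quick figures of merit capture the speed argument, using the round numbers from this paragraph (illustrative values, not measured Rigetti specifications):

```python
t_gate_sc, t_coh_sc = 70e-9, 100e-6  # transmon: ~70 ns 2Q gate, ~100 us T1
t_gate_ion = 100e-6                  # trapped ion: ~100 us 2Q gate
depth = 1_000                        # a 1,000-layer two-qubit circuit

print(f"transmon: ~{t_coh_sc / t_gate_sc:.0f} sequential 2Q gates "
      f"per coherence time")
print(f"depth-{depth} circuit: transmon ~{depth * t_gate_sc * 1e6:.0f} us, "
      f"trapped ion ~{depth * t_gate_ion:.1f} s "
      f"(~{t_gate_ion / t_gate_sc:.0f}x wall-clock advantage)")
```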

With these strengths come some limitations. A core challenge for superconducting qubits is their relatively short coherence time (by quantum standards). Even with materials and design improvements, transmon qubits have T1/T2 coherence times in the range of tens to a few hundred microseconds. This is far shorter than, say, the coherence of trapped-ion qubits, which can remain coherent for seconds or more. In practice, this means superconducting qubits accumulate errors faster and rely on performing operations quickly to beat the decoherence clock. Rigetti’s fast gates mitigate this but do not eliminate it; error correction will be needed to execute algorithms requiring millions of operations. Another limitation is connectivity. Superconducting qubits are typically laid out in planar lattices with nearest-neighbor connectivity (each qubit only directly interacts with its immediate neighbors). Schemes like bus resonators can extend interactions somewhat, but superconducting chips do not naturally have the all-to-all connectivity that ion traps do. By contrast, in a trapped-ion chain, any qubit can in principle interact with any other via collective motional modes, greatly simplifying certain algorithms. Rigetti’s current chips use a square grid coupling, which is efficient for local operations (e.g. surface code layouts) but may require swap networks or routing for algorithms needing distant qubit interactions. Neutral atoms share a similar advantage to ions: atoms in an array can often be reconfigured or addressed with flexible geometry (some architectures even move atoms around), affording potentially higher connectivity in 2D/3D layouts. Neutral atom qubits (Rydberg atoms) also enjoy long coherence and no fabrication defects (each atom is identical), but today their two-qubit gate fidelities (typically 95-99%) lag behind the best of ions and superconductors, and gate speeds are in the microsecond regime. Thus, neutral-atom systems promise high scalability (hundreds of qubits have been demonstrated in optical tweezers) and room-temperature operation, but need to catch up in gate reliability and classical control to be competitive for general computations.
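The connectivity trade-off has a concrete compilation cost. On a nearest-neighbor square lattice, entangling two distant qubits requires a chain of SWAPs roughly proportional to their Manhattan distance – the toy estimate below ignores congestion and clever routing, so real compilers may do better or worse:

```python
def swap_overhead(a, b):
    """SWAPs needed to make grid qubits a and b adjacent on a square
    nearest-neighbor lattice: Manhattan distance minus one."""
    (r1, c1), (r2, c2) = a, b
    return max(abs(r1 - r2) + abs(c1 - c2) - 1, 0)

# Opposite corners of a 6x6 (36-qubit) grid:
print(swap_overhead((0, 0), (5, 5)), "SWAPs before one entangling gate")  # 9
# An ion chain with all-to-all connectivity would need 0 extra SWAPs.
```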

Compared to emerging modalities like photonics, Rigetti’s superconducting approach has a different set of trade-offs. Photonic qubits (e.g. as pursued by Xanadu or PsiQuantum) excel in that photons do not decohere over long distances and can operate at ambient temperatures. Photonic systems are naturally suited for quantum communication and potentially for distributed computing (since linking distant photonic modules is easier than matter-based qubits). However, implementing two-qubit gates with photons is notoriously challenging because photons hardly interact – most photonic quantum computing schemes rely on probabilistic gates or measurement-based entanglement, which require generating and managing massive entangled resource states (optical cluster states) to do useful computing. This leads to an overhead in physical resources that is currently prohibitive for scaling. Superconducting qubits, by contrast, have direct interactions and deterministic gates, making it easier to perform many operations sequentially without needing to consume large resource states. The cost, of course, is the dilution refrigerator: superconducting processors need cryogenic cooling to ~10 mK, whereas photonic qubits (aside from their single-photon detectors or nonlinear sources) can operate at room temperature. Rigetti’s reliance on cryogenics means any large-scale quantum datacenter will require significant cryogenic engineering (though companies like Oxford Instruments are partners to address this). Other modalities like semiconductor spin qubits lie somewhere in between – spins in silicon have extremely long coherence (seconds) and leverage CMOS fab techniques like superconductors do, but spin qubits are still in the few-qubit demonstration stage and have relatively slow gates and challenging control for large arrays.

Overall, Rigetti champions superconducting qubits because they offer a compelling combination of speed, engineering maturity, and a clear scaling path. In the company’s view, superconducting technology is the “winning qubit modality” due to its fast gate speeds and compatibility with modular scaling methods. As summarized by one comparison: superconducting qubits can currently achieve the highest circuit depths and fastest operation rates, whereas trapped ions achieve the highest fidelities and full connectivity – it’s essentially a trade of speed vs. fidelity at present. Rigetti’s bet is that it can close the fidelity gap through engineering (as evidenced by reaching 99.5% two-qubit gates now) faster than other modalities can close the speed gap. Meanwhile, neutral atoms and photonics each have unique strengths (scalability and communication, respectively), but they lag in other respects and may end up complementing superconductors in hybrid systems rather than displacing them. In fact, the future might see hybrid architectures – e.g. superconducting processors networked together by photonic links, combining on-chip computational power with long-distance connectivity. Rigetti’s current approach, though, remains firmly grounded in the superconducting paradigm, focusing on leveraging its advantages (fast, dense, controllable qubits) while methodically overcoming its challenges (noise and scaling overhead).

Track Record

Rigetti’s track record in hitting roadmap targets has been mixed but improving in recent years. During its early growth (around its 2022 SPAC merger), the company set very aggressive milestones – at one point projecting a 1,000-qubit system by 2023 and a 4,000-qubit system by 2024. These timelines proved overly optimistic: by 2023, Rigetti had not yet built those large devices and had to push back the roadmap, resetting the 1,000-qubit goal to late 2025 and the 4,000-qubit goal to 2027 or beyond. Contributing factors included technical hurdles, higher-than-expected development costs, and supply chain delays during 2022-23. The company also underwent leadership change in 2022 (founder Chad Rigetti stepped down, Dr. Subodh Kulkarni took over as CEO), which refocused priorities toward achievable near-term goals like improving fidelity. Since then, Rigetti has shown a pattern of delivering on revised milestones: it successfully launched the 84-qubit Ankaa chip (albeit about a year later than originally forecast) and met the targeted ~99% fidelity on that device by end of 2024. Likewise, the 36-qubit multi-chip Cepheus-1 system was rolled out by mid-2025 as planned, and it hit the promised 99.5% two-qubit fidelity mark. By Q2 2025, Rigetti was able to claim it had built “the industry’s largest multi-chip quantum computer” (Cepheus-1-36Q) and that it remained “on track” for a >100-qubit machine by year’s end. This suggests a growing maturity in aligning technical progress with roadmap timelines, at least on the scale of tens of qubits.

On the user-accessible deployments front, Rigetti has been a pioneer. It was among the first companies to put a gate-model quantum processor on the cloud (its 8-qubit and 19-qubit chips were available via its own Quantum Cloud Services as early as 2017). In 2020, Rigetti’s systems were offered through Amazon Braket at that service’s launch, initially with a 30+ qubit Aspen chip. By 2022, Rigetti had deployed an 80-qubit device (Aspen-M-1, a two-chip 40+40-qubit system) to Amazon Braket. That machine encountered some performance issues and was later retired in favor of Rigetti’s next generation. In August 2024, AWS added Rigetti’s 84-qubit Ankaa-2 processor to Braket, making it the highest qubit-count gate-based QPU on the platform at that time. Unlike some cloud QPUs with limited availability, Ankaa-2 was available to run user circuits throughout the day, reflecting a robust deployment. Rigetti also provides access to its latest systems via its own QCS cloud and has announced integration with Microsoft Azure Quantum (the 36-qubit Cepheus system is slated to be accessible to Azure users). In addition to cloud access, Rigetti has delivered on-premise systems: a notable example is the fully operational 24-qubit system it delivered to the UK’s NQCC, which went live in late 2024 as part of a £10 million consortium project. The Novera 9-qubit systems introduced in 2023 have been sold or placed in multiple research settings (Israel, UK, universities), demonstrating Rigetti’s ability to package its technology into smaller turnkey units for external use. This broad presence – on AWS, in national labs, and shipped to partners – shows a solid track record of making its quantum computers available beyond its own walls, an important credibility factor in the industry.

It’s worth noting that Rigetti’s journey has included challenges typical for a startup in an expensive deep-tech field. The company has incurred substantial losses (for example, a $39.7 M net loss in Q2 2025 alone) and had to raise capital (approximately $350 M equity raise in 2025) to extend its runway. At one point in mid-2023 its stock was at risk of Nasdaq delisting, though the infusion of cash and technical progress have since stabilized the situation. Despite these hurdles, Rigetti has generally fulfilled the core technical promises of its recent roadmaps (fidelity improvements, multi-chip demonstration, etc.), albeit on adjusted timelines. The company’s credibility is further bolstered by external validation in the form of government contracts and partnerships (e.g. DARPA’s selection, the Innovate UK award, integration into AWS and Azure ecosystems). Users today can program Rigetti processors through the cloud with standard toolchains (Rigetti’s Quil toolchain or the Braket SDK), and the performance (errors, speeds) is transparently documented – Rigetti routinely publishes metrics and even research papers on its systems, which is valued by the expert community. In summary, while Rigetti did overpromise in early projections, it has since built a track record of steady iterative achievement: larger and better QPUs roughly year-over-year, accessible to end users and researchers, and tangible progress toward the lofty goal of scalable quantum computing.
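For readers who want to try this, a minimal Bell-pair job through the Braket SDK looks like the sketch below. The device ARN is the author’s assumption for illustration – check the Braket console for the Rigetti QPU currently listed – and QPU tasks incur per-task and per-shot charges.

```python
# Minimal Bell-pair circuit submitted to a Rigetti QPU via Amazon Braket.
# Assumes AWS credentials are configured; the ARN below is illustrative.
from braket.aws import AwsDevice
from braket.circuits import Circuit

bell = Circuit().h(0).cnot(0, 1)  # H then CNOT -> entangled pair

device = AwsDevice("arn:aws:braket:us-west-1::device/qpu/rigetti/Ankaa-3")
task = device.run(bell, shots=1_000)
print(task.result().measurement_counts)  # ideally dominated by '00' and '11'
```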

Challenges

Scaling a quantum computer is often said to be an exponential challenge, and Rigetti faces the full spectrum of technical and practical challenges on the road to its next milestones. A foremost challenge is maintaining qubit coherence and fidelity as the system size grows. Superconducting qubits are sensitive to materials defects, crosstalk, and external noise; as Rigetti increases qubit count, it must ensure that adding more qubits (and more control wiring, etc.) does not introduce proportionally more error. Rigetti’s recent introduction of ABAA frequency-targeting is one response to the frequency crowding issue that emerged in larger chips – by precisely tuning each qubit frequency post-fab, they reduce spectral collisions and two-qubit gate errors. But further materials engineering is needed (e.g. improving surface interfaces, reducing two-level system defects) to push coherence times higher into the 0.1-1 ms range, which would ease error correction overhead. Even with high-coherence qubits, control electronics and cryogenics present scaling bottlenecks. Every additional qubit typically requires microwave control lines and readout lines running into the dilution refrigerator. Today’s 50-100 qubit systems already use hundreds of coaxial cables, contributing heat load and complexity. Going to 1000+ qubits will necessitate new approaches like multiplexed control lines, in-fridge RF routing, or cryogenic control chips. Rigetti has begun addressing this by using advanced packaging (vertical interconnects, multi-layer signal routing) in its multi-chip modules, and by partnering with cryogenic specialists (Oxford Instruments’ modular fridge systems) to handle larger installations. Nonetheless, thermal management and wiring density remain key challenges – packing more qubits into a fridge can increase heating (from control signals) and degrade qubit performance if not carefully managed.
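A rough sense of the wiring problem, with per-qubit line counts that are assumptions rather than Rigetti’s actual numbers (real systems multiplex readout heavily and vary by design):

```python
def coax_count(n_qubits, drive=1.0, flux=1.0, readout=0.2):
    """Approximate coax lines into the fridge: one microwave drive and one
    flux-bias line per qubit, plus heavily multiplexed readout feedlines."""
    return int(n_qubits * (drive + flux + readout))

for n in (84, 336, 1000):
    print(f"{n:>4} qubits -> ~{coax_count(n)} coax lines")  # 184, 739, 2200
```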

Another major challenge is the classical-quantum integration for QEC. As noted, performing real-time quantum error correction requires extremely low-latency readout, feedforward, and decode cycles. Rigetti’s UK project explicitly highlights “processing bottlenecks in classical control systems and their integration with quantum error decoding” as a critical issue to overcome. This means developing fast classical FPGA/ASIC controllers that can process qubit measurement data and send back corrections within microseconds. Rigetti’s current control system (designed with Riverlane for QEC) is one attempt, but scaling that to hundreds of qubits and more complex codes will be an ongoing engineering battle – essentially co-designing a classical compute cluster that sits next to the quantum chip, crunching syndrome data in real-time. Ensuring that this classical overhead does not become the gating factor is non-trivial.
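The latency budget makes the difficulty concrete. With assumed round numbers – a ~1 µs syndrome cycle and ~500 ns readout – the decoder is left with only a few hundred nanoseconds per round:

```python
# Illustrative per-round latency budget for real-time QEC (assumed values).
round_ns = 1000        # ~1 us syndrome-extraction cycle on transmons
readout_ns = 500       # qubit measurement and digitization
transport_ns = 100     # moving syndrome bits to the decoder
feedforward_ns = 100   # issuing the correction back to the control system

decoder_ns = round_ns - readout_ns - transport_ns - feedforward_ns
print(f"decoder budget: ~{decoder_ns} ns per round")  # -> ~300 ns
```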

Manufacturing yield and uniformity are persistent concerns as well. Rigetti’s choice to go modular (using 9-qubit chips as the building block) was driven in part by yield: smaller chips are easier to fabricate with all qubits functional and high-performing, whereas a monolithic 80-qubit chip might have many irregular qubits. While this mitigates some fabrication issues, it introduces new ones – for example, reliably bonding and aligning multiple chips, and calibrating inter-chip couplers. The company did demonstrate a successful 4-chip module, but scaling to larger modules (e.g. a 12-chip 108-qubit module for the 100+ qubit goal) will stretch the packaging precision and calibration algorithms further. Rigetti’s prior delay of the 336-qubit “Lyra” (originally hoped for 2023) underscores how challenging multi-chip integration is; Lyra is now planned after the 100+ qubit interim step. Each added chiplet and coupling interface is a potential point of failure or performance drop, so maintaining system stability and reproducibility is a challenge as complexity grows.

On the business side, Rigetti faces the challenge of sustaining R&D-intensive work in a nascent market. The company’s quarterly revenues (on the order of ~$2-3 M) are small relative to its R&D expenditure, meaning it relies on investor capital, government grants, and strategic partnerships to fund operations. The quantum computing industry is still in an early phase where practical commercial applications are limited, so Rigetti (like its peers) must carefully balance pushing technology forward with demonstrating interim value. There is pressure to achieve “quantum advantage” use-cases to justify the costs – Rigetti often cites hybrid algorithms in areas like chemistry, machine learning, or optimization as near-term targets, but convincing customers of value will depend on improving performance and software tooling. In this regard, Rigetti competes with much larger players (IBM, Google, etc.) and other modalities (IonQ’s trapped ions, for instance, have shown high fidelity albeit at smaller scale). Keeping talent and maintaining investor confidence are perennial challenges in such a competitive, long-horizon field. The company’s substantial cash raise in 2025 (boosting its cash to over $500 M) gives it a runway to execute its roadmap, but it must hit the technical milestones to continue to justify funding.

In summary, Rigetti’s path to scalable, fault-tolerant quantum computing involves tackling noise, scale, and cost challenges simultaneously. Technically, it needs to continually improve qubit quality (longer coherence, lower gate error), engineer robust larger systems (multi-chip integration, better cryo-control infrastructure), and implement error correction with fast classical pipelines. The company appears acutely aware of these hurdles – its recent initiatives (like ABAA for fabrication, fast feedback for QEC, modular cryo systems) directly target known pain points. As Rigetti executives have stated, reaching the end goal requires “hundreds or even thousands of highly accurate qubits,” and doing so will demand innovation to “maintain coherence and minimize external noise” at scale. The coming years will test Rigetti’s solutions to these challenges. If it can overcome them step by step, Rigetti stands to move from today’s proof-of-concept quantum machines toward the commercially viable, fault-tolerant quantum computers that define the CRQC era. Each incremental victory – a slightly bigger chip, a slightly lower error rate, a stable multi-chip module, a first logical qubit – will be hard-won, but together they chart the only viable course through the formidable challenges of this field.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.