Quantum Computing Companies

ORCA Computing

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

ORCA Computing is a U.K.-based quantum computing company (spun out of the University of Oxford in 2019) that builds photonic quantum processors using light (single photons) traveling through optical fiber. Its mission is to make quantum computing a practical reality by delivering near-term quantum accelerators for tasks like machine learning, while concurrently developing a path toward large-scale fault-tolerant quantum computers.

ORCA’s approach centers on a modular, fiber-interconnected architecture – essentially full-stack photonic systems constructed from telecom-grade components (lasers, fibers, switches, detectors, etc.). This design aims to leverage photonics’ advantages (low noise, no need for cryogenics, natural networking via fiber) to achieve scalability and usability beyond what “cumbersome, fragile, and costly” quantum setups have offered to date.

Milestones & Roadmap

Founding and Vision (2019): ORCA Computing was founded in 2019 by Professor Ian Walmsley, Dr. Richard Murray, and Dr. Josh Nunn, building on decades of quantum optics research at Oxford. From the outset, the company’s vision was to pursue a “completely new approach” to photonic quantum computing – one that could deliver useful results in the near term and still scale toward error-corrected universal computing in the long term. Early on, ORCA secured support from the UK’s national quantum initiatives (for example, leading an £11.6 million Quantum Data Centre of the Future project) to develop its unique fiber-based, memory-enhanced photonic architecture.

First-Generation System – ORCA PT‑1 (2022): ORCA unveiled its first quantum processing system, PT‑1, as a minimum viable product in 2022. The PT‑1 was a small-scale photonic processor initially capable of handling on the order of ~8 photonic qubits. Despite the modest qubit count, it was significant as one of the world’s first rack-mounted, room-temperature quantum computers accessible outside a lab. The system integrated with standard computing environments (e.g. a Python/PyTorch software stack) and used a programmable boson sampling paradigm – essentially an optical circuit that could sample from complex probability distributions – to act as a quantum co-processor for tasks like optimization and machine learning. This strategy meant that, even without error correction, the PT‑1 could be used to accelerate certain algorithms (e.g. variational or generative models) by exploiting the natural suitability of photonics for fast parallel sampling. Over 2022-2023, ORCA deployed nine PT‑1 units to external customers in academia, government, and industry – a remarkable achievement in an era when many quantum startups were still operating only in the lab. Notably, the UK’s Ministry of Defence and the Polish supercomputing center in Poznań were early users, as was the UK’s National Quantum Computing Centre (NQCC), which received a PT‑1 as a testbed system.
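To make the “quantum co-processor” idea concrete, here is a minimal sketch of how a sampling accelerator can sit inside a PyTorch training loop. It is illustrative only: the mock_photonic_sample function below is a classical stand-in (not ORCA’s SDK, whose actual API is not shown here), and a score-function surrogate gradient is used because sampling hardware does not provide gradients.

```python
# Minimal sketch of a hybrid "quantum sampler + classical model" loop in PyTorch.
# The photonic back end is mocked; a real system would send the parameters to
# programmable phase shifters and return measured photon-detection patterns.
import torch

N_MODES = 8        # photonic modes (loosely, "qubits" in the PT-1 sense) - illustrative
N_SAMPLES = 256    # detection patterns drawn per training step

def mock_photonic_sample(thetas: torch.Tensor, n_samples: int) -> torch.Tensor:
    """Stand-in for the photonic sampler: returns 0/1 detection patterns whose
    biases depend on the trainable parameters, to keep the sketch self-contained."""
    probs = torch.sigmoid(thetas).detach().expand(n_samples, -1)
    return torch.bernoulli(probs)

thetas = torch.zeros(N_MODES, requires_grad=True)        # trainable "circuit" settings
optimizer = torch.optim.Adam([thetas], lr=0.05)
target = torch.tensor([1., 0., 1., 0., 1., 0., 1., 0.])  # toy target statistics

for step in range(200):
    samples = mock_photonic_sample(thetas, N_SAMPLES)     # non-differentiable draw
    # Hardware sampling gives no gradients, so use a score-function (REINFORCE)
    # surrogate loss: reward samples that match the target statistics.
    reward = -(samples - target).pow(2).mean(dim=1)
    log_prob = (samples * torch.nn.functional.logsigmoid(thetas)
                + (1 - samples) * torch.nn.functional.logsigmoid(-thetas)).sum(dim=1)
    loss = -(reward * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("learned sampling biases:", torch.sigmoid(thetas).detach())
```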

Scaling Up – ORCA PT‑2 (2024-2025): In late 2024, ORCA launched its second-generation machine, PT‑2, marking a significant leap in capability. The PT‑2 system expanded the photonic qubit count to about 90 qubits (e.g. via greater temporal multiplexing and more modes) and delivered higher performance for more complex workloads. The first PT‑2 began shipping to customers in early 2025. Like its predecessor, the PT‑2 is a fiber-optic, telecom-component-based quantum computer that runs at room temperature in a standard 19-inch rack. It was designed for seamless integration into data centers and HPC clusters – for example, it supports NVIDIA’s CUDA-Q platform to enable hybrid quantum-AI workflows. ORCA reported that all existing PT‑1 units in the field could be upgraded to PT‑2 specifications via module additions, reflecting the modularity of its design. This upgradability is a key part of ORCA’s roadmap: each generation (PT‑1, PT‑2, …) is not a stand-alone device but an extensible platform, so that early adopters can continually improve their systems without starting from scratch. By mid-2025, ORCA had successfully installed a PT‑2 at the NQCC – the first photonic quantum computer in the UK public sector – bringing it online in a mere 36 hours. This NQCC photonic testbed is also the first of its kind to incorporate multiple single-photon sources within one system, providing a rich environment for quantum-classical hybrid R&D.

Global Deployments and Partnerships: In parallel with its UK projects, ORCA expanded internationally. In 2024, Montana State University (MSU) became ORCA’s first U.S. customer, acquiring two PT‑1 systems for its new Applied Quantum Computing center (QCORE). MSU selected ORCA’s platform in part because of its working, modular nature – the ability to reconfigure or upgrade the system over time without needing an entirely new machine. Those two PT‑1s were installed at MSU and made fully operational within two days in August 2025. ORCA also engaged in high-profile industry collaborations. For example, in 2025 Vodafone partnered with ORCA to use a PT‑2 system for network optimization problems (like finding optimal fiber routing), demonstrating that ORCA’s quantum accelerator could solve a complex Steiner tree network design problem in minutes, whereas classical methods would take substantially longer. Likewise, ORCA worked with Rolls-Royce and others through the UK’s Quantum Technology Access Programme to explore machine learning applications in chemistry and logistics, often showing that a hybrid setup with ORCA hardware could produce new solutions that purely classical approaches missed.

Future Roadmap: ORCA’s forward-looking roadmap (assessed from interviews with the founder and other sources – no formal roadmap was published) aims to systematically evolve the current photonic accelerator technology into a fully universal, fault-tolerant quantum computer. The company plans iterative hardware releases (a PT‑3 system is planned by 2026) that further increase qubit counts and capabilities. Crucially, ORCA is investing in integrated photonics and advanced switching to complement its fiber-based approach. In 2024 it acquired an integrated photonics division in Texas (from the company GXC) to boost its in-house capability for on-chip optical switches and circuits. Indeed, ORCA’s Chief Science Officer has highlighted that their technical pathway is to combine time-domain multiplexing with fast spatial routing (low-loss optical switches) and even deterministic photon-atom gates, in order to “weave… successful events together into the resource states we need for fault tolerance”. This indicates that ORCA’s future machines will likely incorporate quantum memory modules, integrated single-photon sources, and optical switch networks in concert – enabling larger entangled states with less overhead. By pursuing these advances, ORCA expects to reduce the redundancy currently needed for photonic quantum computing and make truly large-scale, error-corrected systems a commercially viable prospect. In summary, ORCA’s roadmap is one of gradual evolution: deliver useful photonic processors now (and learn from real-world use), while steadily adding the ingredients (more photons, better connectivity, error-correcting protocols) required for a general-purpose quantum computer later this decade.

Focus on Fault Tolerance

Achieving fault-tolerant quantum computation – the ability to correct errors and reliably run arbitrarily long algorithms – is a core long-term goal for ORCA. The company explicitly acknowledges that fault tolerance is “crucial for large-scale general-purpose quantum computing”, even as it pragmatically balances that goal with near-term commercial uses. ORCA’s approach to fault tolerance is rooted in the measurement-based photonic architecture. Instead of executing gate operations one by one (which is difficult with photons due to their probabilistic interactions), a measurement-based scheme prepares large entangled resource states of photons ahead of time, which can then be consumed by sequential measurements to carry out computations. In this paradigm (often called one-way quantum computing or cluster-state computing), the key challenge is creating those entangled resource states at scale – especially when using only linear optical elements where two-photon gates aren’t deterministic. ORCA’s research has been heavily focused on this problem of state generation under realistic constraints like photon loss and non-determinism.
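To ground the measurement-based picture, the short numpy sketch below works through the elementary one-way computing step: entangle an input qubit with a |+⟩ qubit via a CZ gate, measure the first qubit in a rotated basis, and the remaining qubit ends up carrying H·Rz(θ) applied to the input, up to a Pauli-X correction that depends on the outcome. This is textbook cluster-state mechanics, not ORCA-specific code.

```python
# One measurement-based (one-way) computing step: CZ-entangle |psi> with |+>,
# measure qubit 1 in a rotated basis, and H*Rz(theta)|psi> appears on qubit 2,
# up to an X correction that depends on the measurement outcome.
import numpy as np

theta = 0.7                                  # measurement-basis rotation angle
psi = np.array([0.6, 0.8j])                  # arbitrary normalized input qubit
plus = np.array([1, 1]) / np.sqrt(2)

CZ = np.diag([1, 1, 1, -1]).astype(complex)
state = CZ @ np.kron(psi, plus)              # input entangled with the resource qubit

# Measurement basis for qubit 1: (|0> +/- e^{-i*theta}|1>) / sqrt(2)
m0 = np.array([1,  np.exp(-1j * theta)]) / np.sqrt(2)
m1 = np.array([1, -np.exp(-1j * theta)]) / np.sqrt(2)

H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Rz = np.diag([1, np.exp(1j * theta)])
X  = np.array([[0, 1], [1, 0]])
expected = H @ Rz @ psi                      # the gate this step is meant to apply

for outcome, m in enumerate([m0, m1]):
    out = np.kron(m.conj(), np.eye(2)) @ state   # project qubit 1, keep qubit 2
    out = out / np.linalg.norm(out)
    if outcome == 1:
        out = X @ out                            # byproduct Pauli correction
    print(f"outcome {outcome}: overlap with H*Rz|psi> = {abs(np.vdot(expected, out)):.6f}")
```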

One pillar of ORCA’s fault-tolerance strategy is the development of modular resource-state generators that can produce entangled clusters of photons efficiently and on demand. In ORCA’s words, these entangled states are the “essential fuel” for a photonic quantum computer. The company has been building a suite of tools to generate such resource states across a wide range of hardware configurations. This flexibility is important because no one knows which photonic hardware (what kind of sources, memories, detectors, etc.) will ultimately prove best; ORCA wants its state-generation methods to be adaptable to different component technologies. In a January 2024 publication, ORCA scientists described new schemes for fusion-based entanglement generation in linear optics. They showed that by using multi-photon fusion measurements (joint optical measurements that project several photons onto an entangled basis) along with small ancillary “seed” states, one can significantly boost the success probability of creating larger photonic GHZ or cluster states. In fact, by introducing single-photon auxiliary inputs to the fusion process, ORCA demonstrated improved success rates for photonic entangling operations that previously succeeded only with low probability. This kind of scheme optimization and use of redundancy is aimed at overcoming the inherent probabilistic nature of photonic gates. The result is a toolkit that can generate various entangled graph states (including error-correcting code states) more efficiently, and it links these generation processes to intuitive representations like ZX-diagrams to aid design. In short, ORCA is devising methods to produce the complex entangled “fuel” states for computation in a way that is less resource-intensive and more tolerant of failures.
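The payoff from multiplexing probabilistic fusions can be seen with simple arithmetic: if a single entangling attempt succeeds with probability p, then allowing k attempts (in parallel channels or successive time bins) and keeping any success raises the effective probability to 1 − (1 − p)^k. The numbers below are illustrative, not ORCA’s measured rates.

```python
# Illustrative arithmetic: multiplexing turns low-probability heralded fusions
# into near-deterministic events. Numbers are examples, not ORCA's measured rates.
def multiplexed_success(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    return 1.0 - (1.0 - p_single) ** attempts

for p in (0.25, 0.50, 0.75):        # single-attempt fusion success (illustrative)
    for k in (1, 4, 16, 64):        # multiplexing factor (time bins or channels)
        print(f"p = {p:.2f}, k = {k:3d} -> P(at least one success) = "
              f"{multiplexed_success(p, k):.4f}")
```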

Another critical research breakthrough for ORCA has been in coping with photon loss, which is arguably the Achilles’ heel of optical quantum computing. Photon loss (a photon getting absorbed or escaping) directly corrupts quantum information and can break entanglement, so a fault-tolerant photonic architecture must either correct for loss or prevent it beyond a certain threshold. Traditional approaches to make photonic computing loss-tolerant often involved using very large entangled states with built-in redundancy (so that if some photons are lost, the logical information survives). But generating huge resource states is itself extremely challenging. ORCA’s team showed an alternative: by slightly increasing the complexity of the measurement modules (the part of the computer that uses the resource states), one can get away with using simpler, smaller resource states that are nonetheless resistant to loss. In other words, ORCA shifted some burden from state preparation to state measurement. Their new measurement protocol acts like a more sophisticated engine that can run on “cheaper fuel”. Technically, this involved a novel linear-optical measurement scheme that can tolerate photon loss by design, unlocking a class of smaller entangled states that still support fault-tolerant computing when measured appropriately. This result, published in October 2023, suggests a “commercially viable” route to building universal photonic quantum computers: use many small high-quality resource states (easier to make) in tandem with a smart measurement strategy, rather than attempting to build one gigantic entangled state upfront. It’s a promising step because it reduces the scale of the hardest part (state generation) without compromising the error-correction capabilities of the system.

ORCA’s long-term roadmap thus envisions a photonic architecture composed of many repetitive modules: some modules function as resource state generators (producing, say, small cluster states continuously), and others as measurement units that consume those states to perform logical operations, all while an error-correcting code (with redundancy against loss and noise) ties everything together. By increasing the number of such modules and improving their efficiency, the architecture can scale out to millions of physical qubits needed for fault tolerance, all while being assembled from relatively simple, repeatable components.

Finally, ORCA is exploring ways to incorporate deterministic entangling operations to further reduce overhead. One avenue is through photon-atom interactions: ORCA’s use of rubidium atomic quantum memory (more on this in later sections) not only serves to store photons, but also offers a medium where photons could interact via the atom’s mediation. ORCA has hinted that by exploiting predictable light-matter interactions (for example, using an atomic ensemble or single atom as a mediator), they plan to implement on-demand two-qubit gates between photonic qubits. Such deterministic gates would be a game-changer for photonics, as they bypass the probabilistic limitation of linear optics. ORCA’s integration of quantum memories and Rydberg-atom-based technologies is likely aimed at this goal.

In summary, ORCA’s focus on fault tolerance is characterized by: (1) Fusion-based, multiplexed entanglement generation to create error-correcting resource states efficiently; (2) Loss-aware architecture that uses smaller resource states in conjunction with advanced measurements to handle photon loss; and (3) gradual incorporation of active, deterministic gate elements (via atoms or fast feed-forward switching) to reduce the overhead of redundancy. All of these are geared toward one thing – scalability. ORCA is effectively prototyping the pieces of a future photonic quantum computer that could have thousands or millions of qubits, by ensuring now that those pieces (be they memories, sources, or measurement protocols) can work in large numbers and with error correction. While many breakthroughs are still needed before a full fault-tolerant machine is realized, ORCA’s progress suggests a plausible path to get there.

CRQC Implications

A Cryptographically Relevant Quantum Computer (CRQC) is generally defined as a quantum machine capable of breaking modern cryptographic systems (for instance, by running Shor’s algorithm to factor large RSA integers or solve discrete log problems). Such a feat is estimated to require on the order of thousands of logical qubits (and likely millions of physical qubits when error-correction overhead is accounted for) – well beyond the scale of today’s devices. ORCA Computing has not explicitly announced any project to crack cryptography; its public focus has been on AI/ML applications and near-term advantage. However, the question naturally arises: if ORCA’s photonic architecture were scaled to fault-tolerant universality, could it contribute to a CRQC in principle? The technical evidence suggests yes. ORCA’s approach is a universal quantum computing architecture at heart – it is not restricted to only analog tasks or sampling problems, even if those are its early use-cases. By pursuing cluster-state quantum computing with error correction, ORCA is effectively building a machine that could run any quantum algorithm (given sufficient qubits and time). This includes cryptographic algorithms like Shor’s.

Photonic platforms are widely considered viable routes to extremely large-scale quantum computers. In fact, some of the most ambitious roadmaps in the field come from photonic companies – for example, PsiQuantum (another photonics-focused firm) is targeting a million-qubit fault-tolerant optical quantum computer by 2028-2030. Photons have inherent advantages for scaling: they don’t decohere easily (a photon can travel long distances or wait in a delay line without losing its quantum state, as long as losses are minimized) and they can be communicated between modules via standard optical fiber with very low error. ORCA’s design fully leverages these traits by using fiber interconnects as the “native” way to scale up – essentially treating multiple smaller photonic processors as a distributed network that can behave like one big processor. Mark Thompson of PsiQuantum aptly noted that “optical fiber is the most efficient way to transmit information between modules, and that’s why photonic quantum computing is so compelling”. ORCA’s Chief Science Officer, Josh Nunn, echoes a similar vision: the only path to a large-scale quantum computer is a distributed approach with many networked units, and photonics is uniquely suited for that style of scaling. In ORCA’s case, the use of fiber-and-memory modules means that one can imagine chaining dozens, then hundreds, of identical photonic racks, all linked by optical connections, to realize a huge composite machine. This modular scalability is precisely what a CRQC demands – the ability to grow the number of qubits without fundamentally altering the architecture.

Another aspect connecting ORCA’s work to CRQC is its emphasis on error correction and loss tolerance (as discussed in the previous section). To factor a 2048-bit RSA number, for example, a quantum computer would likely need on the order of a million or more physical qubits running error-correcting codes for many hours. That is utterly impossible without a robust fault-tolerant scheme. ORCA’s research into high-efficiency entangled state generation and photon-loss-resilient protocols is directly aimed at meeting the requirements for large error-corrected computations. Their novel measurement-based approach to tolerating photon loss shows a path to keep error rates within threshold even as photons propagate through extended networks. In principle, if ORCA’s architecture can achieve an error rate per operation below the threshold (which for photonic fusion-based schemes might be on the order of a few percent loss/error per photon), it can be scaled up arbitrarily by adding more modules and more photons, and error correction will stabilize the computation. There is nothing obviously “specialized” about ORCA’s design that would exclude running Shor’s algorithm or other cryptographically relevant routines – it is a universal gate model machine once fully realized.
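For a sense of where the “million or more physical qubits” figure comes from, here is a deliberately generic back-of-the-envelope estimate using surface-code-style scaling. The constants (threshold, prefactor, the 2d² qubits per logical patch) and the logical-qubit count are textbook placeholders, not a statement about ORCA’s photonic codes or fusion networks.

```python
# Generic error-correction overhead arithmetic (surface-code-style, illustrative).
def code_distance(p_phys: float, p_logical_target: float,
                  p_th: float = 0.01, prefactor: float = 0.1) -> int:
    """Smallest odd d with prefactor * (p_phys/p_th)^((d+1)/2) <= target."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

logical_qubits = 4_000    # order-of-magnitude logical count for RSA-2048 (illustrative)
p_phys = 1e-3             # assumed physical error/loss rate per operation
p_target = 1e-12          # tolerable failure rate per logical operation for a long run

d = code_distance(p_phys, p_target)
phys_per_logical = 2 * d ** 2          # data + ancilla qubits per surface-code patch
total = logical_qubits * phys_per_logical
print(f"code distance d = {d}")
print(f"~{phys_per_logical} physical qubits per logical qubit")
print(f"~{total / 1e6:.1f} million physical qubits in total")
```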

One could argue that photonic quantum computers might even be advantageous for cryptographic tasks in the long run. Because they can network over distance, one could envision a CRQC composed of photonic sub-processors spread across a data center, or even across multiple data centers, linked by optical fiber. This is a natural extension of ORCA’s current modular approach. It means that, unlike some monolithic systems that face engineering limits in size, a photonic CRQC could be distributed (much like internet infrastructure) to achieve huge scale. Moreover, ORCA’s use of multiplexing in time (storing successful photonic qubits in a memory until needed) reduces the physical resource requirements by recycling components, which could make a CRQC slightly more feasible hardware-wise. Of course, many technical hurdles remain before anyone – ORCA or otherwise – reaches the cryptography-breaking stage. The required number of photons, sources, detectors, and memory cells is orders of magnitude beyond current devices, and ensuring all those photons interfere with high fidelity and low loss is an immense challenge. However, there are no fundamental physical barriers known; it is largely an engineering and scaling problem. ORCA’s steady progress (from 8 qubits in 2022 to 90 qubits in 2025, with fault-tolerance research advancing in parallel) is a microcosm of the incremental path such a scaling effort would take. It’s noteworthy that ORCA itself, while not marketing towards cryptography, uses essentially the same ingredients that a CRQC would: single-photon sources, entangling operations, quantum memories, fast feed-forward logic, and error-correcting codes. Therefore, if ORCA succeeds in building a fault-tolerant photonic quantum computer for any application, that machine would inherently be capable of cryptographically relevant computations as well.

In summary, although ORCA Computing is currently emphasizing quantum machine learning and optimization over cryptanalysis, its photonic platform holds significant implications for cryptography. A fully realized ORCA photonic network – modular, massively multiplexed, and error-corrected – could theoretically be scaled into a CRQC, one that might decrypt today’s public-key encryption by leveraging thousands of stable logical qubits.

Modality & Strengths/Trade-offs

ORCA’s quantum computing modality is fiber-optic photonics, distinguished by the use of single photons as qubits and optical fiber as the medium for guiding and storing quantum information. In ORCA’s devices, qubits are typically encoded in dual optical modes (for instance, the presence of a photon in one fiber path vs another, known as a dual-rail encoding, or potentially polarization states of a photon). These photons are generated by sources such as faint pulsed lasers or single-photon emitters, and they propagate through a network of beam splitters, phase shifters, and interferometers to create quantum logic operations.
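For readers new to dual-rail encoding, the toy numpy sketch below shows why beam splitters and phase shifters suffice for single-qubit control: a Mach-Zehnder interferometer (beam splitter, internal phase, beam splitter) rotates the photon between the two rails by exactly the programmed phase. This is standard linear optics, not a description of ORCA’s specific circuits.

```python
# Dual-rail single-qubit control with linear optics: a Mach-Zehnder interferometer
# acts on the two rails as an X-rotation by its internal phase, while a phase
# shifter on one rail acts as a Z-rotation. Textbook optics, purely illustrative.
import numpy as np

def beamsplitter_5050() -> np.ndarray:
    return np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def phase_shifter(phi: float) -> np.ndarray:
    return np.diag([1, np.exp(1j * phi)])

def mach_zehnder(phi: float) -> np.ndarray:
    B = beamsplitter_5050()
    return B @ phase_shifter(phi) @ B      # beam splitter, internal phase, beam splitter

phi = 1.1
U = mach_zehnder(phi)

# Up to a global phase, this equals an X-rotation of the dual-rail qubit by phi.
Rx = np.array([[np.cos(phi / 2), -1j * np.sin(phi / 2)],
               [-1j * np.sin(phi / 2), np.cos(phi / 2)]])
print("MZI == Rx(phi) up to global phase:", np.allclose(U, np.exp(1j * phi / 2) * Rx))

# A photon injected into the "top" rail (logical |0>) exits split between rails
# with probabilities cos^2(phi/2) and sin^2(phi/2) - a programmable rotation.
out = U @ np.array([1.0, 0.0])
print("exit probabilities per rail:", np.round(np.abs(out) ** 2, 4))
```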

A unique aspect of ORCA’s architecture is its heavy reliance on time-domain multiplexing and quantum memory. Rather than sending many photons through many channels in parallel (spatial multiplexing on a chip), ORCA sends photons through fiber delay lines and switches such that the same hardware can be reused across multiple time slots. If an operation succeeds on a photon, that photon might be stored briefly in a rubidium-based memory until other operations catch up, and if an operation fails, another time-bin (another attempt) is sent through until success is achieved. In practice, ORCA uses rubidium-87 atoms in a hollow-core optical fiber as a quantum memory to catch and hold photonic qubits with low loss. Josh Nunn describes this as “catching successful attempts and routing them to the next stage of the processor” – a core capability to make nondeterministic photonic processes more scalable. This memory-based synchronization is a defining feature: it allows ORCA to multiplex in time rather than requiring a massive number of physical components in parallel. Each ORCA module can thus cycle through many trials rapidly and only retain the photons from the trials that succeeded, synchronizing them for further quantum operations.
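A small Monte Carlo sketch makes the value of the memory tangible: suppose two heralded, probabilistic photon events must both be available in the same clock cycle for the next stage. Without storage, both must succeed in the same time bin; with a memory that holds a success for a number of bins, the expected wait drops sharply. The success probability and storage lifetime used below are made-up illustrative numbers, not ORCA’s specifications.

```python
# Monte Carlo sketch of memory-assisted time multiplexing. Two heralded photon
# processes each succeed with probability P_HERALD per time bin; the next stage
# needs both photons at once. A memory holding a success for several bins lets
# one photon "wait" for the other. All numbers are illustrative.
import random

P_HERALD = 0.1      # heralded success probability per time bin (illustrative)
TRIALS = 20_000

def bins_until_pair(memory_lifetime: int) -> int:
    """Time bins elapsed until two heralded photons are available simultaneously."""
    stored_a = stored_b = -10**9        # bin index at which each photon was last captured
    t = 0
    while True:
        t += 1
        if random.random() < P_HERALD:
            stored_a = t
        if random.random() < P_HERALD:
            stored_b = t
        # A photon is usable if it was captured within the memory lifetime.
        if t - stored_a < memory_lifetime and t - stored_b < memory_lifetime:
            return t

for lifetime in (1, 5, 20):   # lifetime = 1 bin means "no memory" (same-bin coincidence)
    mean_wait = sum(bins_until_pair(lifetime) for _ in range(TRIALS)) / TRIALS
    print(f"memory lifetime = {lifetime:2d} bins -> mean wait ~ {mean_wait:6.1f} bins")
```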

Strengths of ORCA’s Photonic Approach: The ORCA design inherits many of photonic quantum computing’s general advantages, and adds a few of its own:

  • No Cryogenics, Room-Temperature Operation: ORCA’s hardware runs at room temperature and is “air-cooled” in standard rack units. Photons are not susceptible to thermal decoherence in the way matter qubits are, so ORCA’s qubits do not need dilution refrigerators or ultra-high vacuum chambers. This dramatically lowers the infrastructure barrier. As the CEO quipped, an ORCA photonic quantum computer “looks and behaves like a standard 19-inch rack-mounted server” rather than a sci-fi chandelier of wires. This not only reduces cost, but also simplifies integration into existing data centers and cloud environments (no special facility needed).
  • Telecom Integration and Modular Scalability: ORCA builds its machines largely from off-the-shelf telecom components – e.g. 1550 nm lasers and fibers, modulators, switches – which are mature and readily available. The use of optical fiber is especially important: fiber provides a low-loss, noise-resistant channel to connect different parts of the processor or even multiple processors. ORCA’s architecture is natively networked, meaning it can use fiber to connect modules within a machine or even between machines. This modularity is a big strength: if more qubits or capability are needed, ORCA can in principle add another fiber-linked module (containing additional photon sources or memory banks) without a total redesign. The company emphasizes this as “superior scalability” – delivering value today with upgradable systems for tomorrow. Indeed, MSU’s choice of ORCA was driven by the desire for a working system that “has all the modularity you would like”, avoiding the need to buy a new quantum computer every few years. The fiber connectivity also means ORCA could distribute quantum processing across multiple racks, akin to nodes in a network, which aligns well with data center practices (just as classical supercomputers link many nodes by high-speed interconnects).
  • High Parallelism via Multiplexing: Using time-bin multiplexing and memory, ORCA can attempt many operations in rapid succession, effectively gaining parallelism over time without duplicating physical hardware for each qubit. This “quantum multiplexing” exploits multiple degrees of freedom of photonic qubits (time, polarization, frequency) to increase throughput. For example, multiple entangled photon pairs can be generated in sequence and buffered, then fused into a larger entangled state. This significantly improves the utilization of components and can reduce the number of components needed to achieve scale. It’s a strength over purely spatial approaches where every additional qubit might require another physical copy of a component on a chip. ORCA’s ability to reuse components across many time slots is a cost-efficient path to scaling.
  • Low Noise and Long Coherence: Photons interact very weakly with their environment – they don’t easily decohere from thermal or magnetic noise. This means once a photon is prepared in a quantum state, it can maintain coherence for a relatively long time (the main threat is loss, not decoherence). ORCA’s memory uses atomic interactions to briefly hold photons, but those atoms (warm Rb gas) are controlled such that decoherence is minimized and storage time is sufficient for synchronization purposes (likely on the order of micro- to milliseconds). The coherence length of photons in fiber can be extremely long (kilometers of fiber correspond to microseconds of delay with minimal phase noise; a quick delay calculation follows this list). This trait is crucial for building up large entangled states; photons won’t spontaneously lose their quantum information while waiting to be measured.
  • Integration with Classical and AI Workflows: ORCA has tailored its systems for hybrid quantum-classical computing, which is a practical strength. The PT series comes with a software development kit that interfaces with Python, PyTorch, and CUDA libraries. This means classical developers (especially in AI) can relatively easily incorporate ORCA’s photonic accelerator into existing machine learning pipelines. The ORCA PT‑2 is explicitly described as “built for AI and HPC environments,” supporting standard protocols and easier integration into data centers. By using common software tools and having the hardware accessible over network/cloud, ORCA lowers the barrier for users to experiment with quantum in a familiar setting. This developer-friendly approach is a strength in driving early adoption and ensuring that the quantum hardware doesn’t sit idle due to software complexity.
  • Path to Fault Tolerance: While still a work in progress, ORCA argues that its design has a “viable commercial path towards fault tolerance”. The modularity and networking mean that an error-corrected photonic quantum computer could be assembled gradually by scaling out modules. The use of small resource states and efficient fusion (from their research) means that the overhead for error correction might be kept within reasonable limits. In other words, ORCA’s platform is architecturally ready for error correction – it was conceived from the start with error-corrected operation in mind, unlike some NISQ-era designs that might not extend naturally to fault tolerance. This is a strategic strength, as it may allow a smoother transition from today’s prototypes to tomorrow’s large-scale machines.
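As a quick sanity check on the delay-line point in the coherence bullet above: light in standard telecom fiber travels at roughly c/1.47, so each kilometer of fiber buys about five microseconds of storage. Generic fiber parameters, nothing ORCA-specific:

```python
# Fiber delay-line arithmetic: how much storage time does a given length buy?
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBER = 1.47             # typical group index of standard telecom fiber

def fiber_delay_us(length_m: float) -> float:
    return length_m / (C_VACUUM / N_FIBER) * 1e6

for km in (0.1, 1, 5):
    print(f"{km:>4} km of fiber -> {fiber_delay_us(km * 1000):6.2f} us of delay")
```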

Despite these strengths, ORCA’s photonic modality comes with trade-offs and challenges:

  • Probabilistic Operations: In ORCA’s linear optical approach, two-qubit gates (like fusion or entanglement creation events) are not deterministic – they only succeed with a certain probability (often well below 1). This is a fundamental issue with using only beam splitters and detectors for entangling photons: you cannot guarantee an entangling gate every time. ORCA circumvents this by multiplexing, as discussed, but this means a lot of overhead in terms of repeated attempts and more complex control logic to manage those attempts. The probabilistic nature also implies that as circuits get deeper, the success probability of the entire circuit can decrease exponentially if not mitigated. ORCA’s memory helps mitigate the exponential decay by “heralding” successful gates and storing them, but it doesn’t completely remove the randomness. Competing photonic approaches try to address this by other means (e.g., deterministic sources like quantum dot emitters that produce identical photons each pulse, or using squeezed light to generate entanglement deterministically in continuous-variable encodings). ORCA’s reliance on rubidium memory is one way to handle nondeterminism, but it’s a relatively complex piece of hardware itself (mixing atomic physics with photonics). This trade-off means ORCA must maintain both a photonic platform and an atomic memory platform coherently together, which is challenging.
  • Photon Loss and Scaling Overhead: Loss is the bane of photonic quantum computing. Every fiber, every coupling, every detector has some loss. ORCA’s design – with potentially many photons stored and retrieved, traveling through meters of fiber and hitting beamsplitters – has many points where loss can occur. The company’s own assessments highlight photon loss as a “critical design challenge” that necessitates redundancy in resource states. While their protocols reduce the required size of resource states, they still need some redundancy and very high efficiency components. To scale to large systems, ORCA will need ultra-low-loss optical switches, ultra-efficient single-photon sources, and detectors with near-unity efficiency. Each percent of loss eats directly into the effective error rate. For now, ORCA’s systems are small enough to manage losses (and indeed have run 25,000 jobs uninterrupted on the NQCC testbed, showing stability), but scaling to hundreds of qubits with thousands of operations will demand a herculean improvement in total optical throughput.
  • Complexity of Quantum Memory: ORCA’s signature component is its quantum memory – typically a rubidium atomic ensemble in a fiber or cell that can absorb a photon and re-emit it on demand. This is cutting-edge technology in its own right; it requires precise control of atomic states and coupling of light and matter. The memories might need vacuum chambers or magnetic shielding, and lasers to manipulate the atoms. Maintaining these in a robust, user-friendly package is non-trivial. Additionally, memory introduces a time limitation: photons can only be stored for so long before the atomic coherence fades. If the memory lifetime is, say, a few microseconds or milliseconds, that caps how long the system can wait to synchronize events. For very large computations (which might need many time-multiplexed steps), ORCA might need a way to extend memory time or use a chain of swap operations, which could be complex. In short, the quantum memory is both a blessing (for synchronization) and a burden (for hardware complexity and coherence time).
  • Limited Qubit Interactions (so far): In ORCA’s current systems (PT‑1, PT‑2), the primary operations are generating and interfering photons (essentially beam splitter networks for boson sampling or similar). These can solve certain problems (e.g. sampling, shallow circuits), but a wider array of quantum algorithms will require more flexible interactions. Two arbitrary qubits in ORCA’s processor might not interact unless a specific photonic circuit is configured or unless one implements an effective gate via measurement. The “programming” of an ORCA machine likely involves setting optical phase shifters to configure an interferometer. This is fine for certain tasks, but it’s not as straightforward as a gate-model quantum computer where one can apply, say, a CNOT between any two qubits via microwave pulses. ORCA’s plan is to move toward a gate-based system eventually, but until deterministic gates (via photon-atom or other means) are integrated, there are some algorithmic limitations. For example, a large algorithm might require building a huge cluster state ahead of time – something ORCA is working on, but not yet realized in hardware.
  • Competition with Integrated Photonics: ORCA’s use of optical fiber and discrete components offers great flexibility and rapid development (no need for a full chip fabrication to try a new design), but it may face competition from fully integrated photonic chips in terms of compactness and mass-manufacturability. Companies like PsiQuantum and Xanadu are investing in lithographically fabricated photonic chips with thousands of components. Those approaches aim to put sources, circuits, and detectors all on chip, which could reduce losses at interfaces and allow very large parallelism (albeit at the cost of huge complexity in chip design). ORCA has taken a somewhat middle road by combining fiber tech with some integrated parts (and acquiring a photonic chip team to develop switches). A trade-off here is development speed vs. ultimate scalability: ORCA could deploy a working system sooner by using fibers and modules (which it did, delivering systems by 2022), whereas integrated photonic quantum chips are still mostly prototypes but could leap in scale once the fabrication issues are solved. ORCA will need to continuously incorporate the best of integrated components (e.g. on-chip single-photon sources from Sparrow Quantum, on-chip superconducting nanowire detectors from Pixel Photonics as per their Eurostars project) into its fiber platform to stay competitive. This hybrid approach might yield the best of both worlds, but also requires careful engineering to interface fibers with chips efficiently (alignments, coupling loss, etc., are challenges).
  • Detector Requirements: A subtle trade-off concerns single-photon detectors. High-performance single-photon detectors (like superconducting nanowire detectors) typically require cryogenic cooling. It’s possible that ORCA’s current systems use avalanche photodiodes (APDs) that can operate with Peltier cooling at modest temperatures for convenience, but those have lower efficiency and higher noise compared to SNSPDs. If ORCA wants to boost performance (especially for fault tolerance where every photon counts), it may need to use SNSPDs or other advanced detectors, which would introduce a small cryogenic component into an otherwise room-temperature system. Pixel Photonics, one of ORCA’s partners, works on integrating SNSPDs on photonic chips – which could mitigate the impact (small cryo modules could be integrated). Nonetheless, detector performance is a potential bottleneck: dead time, dark counts, and efficiency all affect the overall error rates. ORCA will have to manage these by either technological improvements or clever design (e.g. multiplexing detectors too).

In summary, ORCA’s modality of fiber-based photonic quantum computing with memory offers remarkable strengths – easy scaling via networking, use of proven telecom tech, room-temperature operation, and a current advantage in delivering practical hybrid quantum processors. But it comes with trade-offs: the need to overcome probabilistic gating with complex multiplexing, to fight photon loss tooth-and-nail, to maintain atomic memories, and to gradually bridge towards a fully universal gate set. ORCA’s strategy has been to acknowledge these trade-offs and address them head-on through research and engineering (e.g. their focus on loss tolerance and their move to acquire photonic integration capabilities). Compared to other photonic efforts, ORCA’s distinguishing strength is flexibility: time multiplexing with memory is a clever way to reduce component counts and adapt to component improvements as they come. The flip side is that ORCA must master both photonics and atomic physics in tandem. The coming years will show whether this approach can outpace or complement more deterministic photonic schemes. If ORCA succeeds, it will validate a design that truly “functions, operates and performs like no other” quantum computer – one that is modular, upgradable and avoids many of the cryogenic headaches of other modalities. The balance of strengths vs. trade-offs in ORCA’s modality will ultimately be measured by how well it scales: can they keep losses low and success probabilities high as the system grows? The evidence so far (small but real quantum advantages demonstrated in optimization tasks) is encouraging, but significant challenges remain, as discussed next.

Track Record

In just a few years, ORCA Computing has built a solid track record of delivering working quantum hardware and achieving several “firsts” in the photonic quantum computing landscape. This track record spans successful system deployments, technical demonstrations, and research contributions:

System Deployments and Customers: ORCA has the distinction of being among the first quantum startups to ship actual quantum computers to customers for on-premises use. By early 2025, the company had delivered ten of its PT-series photonic quantum machines to sites around the world. These include government and defense clients (e.g. the UK Ministry of Defence received a system, making ORCA likely the first to provide a quantum computer to a national defense department), national labs (the NQCC in the UK, as noted, where ORCA installed the first photonic testbed system), academic research centers (Montana State University’s QCORE in the US, and University of Edinburgh’s Quantum Software Lab via the NQCC testbed), and industry partnerships (such as those with Vodafone, Rolls-Royce, and others through consortia). The speed and reliability of these deployments have been highlighted: for instance, the NQCC testbed was up and running within 36 hours of delivery and the MSU installations were completed in under 2 days. This suggests a level of maturity in ORCA’s hardware and packaging – the systems are robust enough to be transported and quickly initialized, and they don’t require exotic infrastructure. ORCA’s CEO noted that the successful NQCC installation “reflects the reliability, scalability as well as the maturity of our PT Series system”. Having multiple systems in the field is also a valuable asset for ORCA’s iterative development; user feedback from these early adopters can inform improvements for next-gen systems.

Use Case Demonstrations: ORCA and its partners have already used the PT-series machines to tackle non-trivial problems, showcasing the potential for quantum advantage in specific domains. A striking example is in quantum-enhanced generative modeling for chemistry: ORCA collaborated with researchers (e.g. at a pharmaceutical company and at BP plc) to use a PT‑1 device alongside classical generative models (GANs) to propose new molecular structures for drug discovery and biofuels. The result was that the hybrid quantum-classical model could identify candidate molecules that the classical model alone had never considered, effectively broadening the search space. This implies that ORCA’s photonic sampler provided a more diverse probability distribution for the generative model, an advantage in creative problem-solving. Another success was in combinatorial optimization: at a Quantum Technology Access Programme event in 2025, ORCA’s PT‑2 was used to solve the Steiner Tree problem (a network optimization challenge) much faster than a classical approach. Vodafone reported that the quantum solution – optimizing fiber network routes – came in minutes whereas classical algorithms would take hours or longer. Additionally, ORCA’s hardware has been used to run an 81-variable binary optimization task and to benchmark hybrid quantum/classical generative AI models for molecular chemistry, completing 25,000+ jobs without interruption on the NQCC testbed. These practical milestones, albeit in specialized tasks, are important evidence that ORCA’s machines are not just lab curiosities; they can perform lengthy computations with high reliability and contribute useful insights in real-world contexts.
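For context on what the binary-optimization demonstrations above involve, the snippet below writes such a problem in QUBO form (minimize xᵀQx over bitstrings) and evaluates candidate solutions; in a hybrid workflow the candidate bitstrings would come from the photonic sampler, while here a purely classical random-plus-greedy baseline stands in. The matrix and problem size are arbitrary examples, not the actual 81-variable task.

```python
# What a binary-optimization (QUBO) task looks like: minimize x^T Q x over 0/1
# vectors x. In a hybrid setup the candidates would come from a quantum sampler;
# a classical random-then-greedy baseline stands in here. Q is random, for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of binary variables (illustrative)
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                        # symmetric QUBO matrix

def cost(x: np.ndarray) -> float:
    """QUBO objective: x^T Q x for a 0/1 vector x."""
    return float(x @ Q @ x)

def greedy_improve(x: np.ndarray) -> np.ndarray:
    """Flip single bits as long as any flip lowers the cost (local search)."""
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y = x.copy()
            y[i] ^= 1
            if cost(y) < cost(x):
                x, improved = y, True
    return x

best = None
for _ in range(200):                     # candidate bitstrings ("the sampler's" proposals)
    x = greedy_improve(rng.integers(0, 2, size=n))
    if best is None or cost(x) < cost(best):
        best = x

print("best bitstring:", best)
print("best cost:", round(cost(best), 3))
```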

Research Contributions: ORCA’s team, including co-founders who are respected quantum optics researchers, has kept one foot in academia, pushing the state-of-the-art in photonic quantum computing. Beyond the fault-tolerance research already described (fusion measurements, loss-tolerant architectures), ORCA has published on topics like new algorithms for solving QUBO (quadratic unconstrained binary optimization) problems with shallow quantum circuits and on using shallow bosonic circuits with classical machine learning assistance for optimization. They also have work on the interface of photons and atoms: e.g., demonstrating optical pumping of rubidium atoms in a hollow-core fiber (relevant for improving the quantum memory efficiency). Notably, ORCA was co-author on a perspective paper about “light in quantum computing and simulation,” providing thought leadership on photonic approaches. This steady output of peer-reviewed papers and preprints shows ORCA’s commitment to rigorous, primary research – something that lends credibility in a field where hype can sometimes outpace results. The company’s scientists are effectively solving the theoretical and experimental problems that directly feed into their hardware development. For example, the demonstration of photon-loss resistant protocols was not just a theory paper; it underpins ORCA’s claim that they have a roadmap to error correction. Overall, ORCA’s publication record and participation in collaborative R&D (such as the Eurostars consortium with Sparrow Quantum and Pixel Photonics to integrate single-photon sources and detectors into a full stack system) reflect a serious, academically grounded effort, not just a commercial push.

Industry Recognition and Investment: In the quantum tech community, ORCA’s progress has been noticed. In mid-2025, Technology Magazine named ORCA one of the “Top 10 Quantum Computing Companies”, a sign that its profile has risen alongside better-funded competitors. The company successfully raised multiple rounds of funding (including about £8-15 million in Series A and additional capital in 2023-24 for expansion, such as opening offices in Canada and the US). ORCA’s strategy of balancing near-term revenue (by selling PT‑1/PT‑2 units and services) with long-term R&D likely helped attract both government grants and venture investment. The partnership with the UK government’s NQCC, the contract with the US Air Force-supported MSU QCORE initiative, and projects with industrial giants like IBM (via a consortium on quantum networking multiplexing) underscore that ORCA is seen as a key player in photonic quantum computing. This external validation – be it through funding, partnerships, or awards – is part of its track record in demonstrating that a small startup can meaningfully contribute to quantum computing progress.

Operational Expertise: Through deploying multiple systems, ORCA has gained practical know-how in operating quantum computers in different environments. Each installed system likely contributes to their “knowledge hub” of best practices, from calibration routines to user training. For instance, ORCA provides an SDK and presumably documentation so that organizations like the University of Edinburgh’s Quantum Software Lab can start running experiments on the photonic hardware. The ease with which users can run tens of thousands of jobs (as reported) indicates that ORCA has built a relatively stable software/hardware stack around the photonic core. This operational track record is valuable and not trivial – many quantum prototypes require constant babysitting by their inventors, whereas ORCA is approaching a point where external users can execute tasks independently. It hints at a thoughtful full-stack development, from photonics up to the cloud interface.

In summary, ORCA’s track record combines technical milestones (delivering increasing qubit counts, novel photonic techniques) with practical deployments (10 systems serving real users) and innovative applications (quantum-aided AI, network optimization). It stands out that ORCA has footings in both worlds: it is an academic spin-out producing high-quality research and a startup delivering a product. Few quantum computing companies have navigated both aspects so effectively by 2025. This track record bodes well for ORCA’s future – the hands-on experience of deploying systems will inform the design of their next generations, and the trust built with early customers will likely translate into continued support. It also helps validate the photonic approach: every successful ORCA deployment is a proof-point that photonic qubits “have left the lab”. Given that ORCA plans an even more powerful PT‑3 system in the near future, one can expect their track record to grow with larger demonstrations (perhaps tackling bigger optimization problems or integrating error mitigation techniques).

For now, ORCA has established itself as a leader in photonic quantum computing by doing something quite rare: delivering on promises incrementally and learning from each step, rather than just promising a grand machine years down the line. As they pursue the next milestones (like quantum advantage for specific tasks by 2026), their current achievements provide a strong foundation.

Challenges

Despite ORCA’s impressive progress, significant challenges remain on the road ahead – challenges that are not just for ORCA but for photonic quantum computing (and indeed all quantum hardware approaches) at large. These include technical hurdles, scaling issues, and strategic/business challenges:

Scaling to Large, Error-Corrected Systems: The leap from a 90-qubit machine (PT‑2) to a fault-tolerant machine with perhaps millions of qubits is enormous. One core challenge is maintaining performance as the system size grows. ORCA has shown it can generate and manipulate on the order of 10-100 photonic qubits, but going to thousands will multiply complexity. The sheer number of components – single-photon sources, memory cells, beam splitters, switches, detectors – required for a useful universal quantum computer is daunting. For instance, a recent analysis by ORCA’s researchers emphasizes that millions of components will need to work in concert for a full FTQC, and one cannot rely on just a few configurations being reliable. Ensuring that each added component (or each additional module) doesn’t drastically increase error rates or losses is a huge engineering challenge. ORCA’s modular strategy alleviates some integration risk (you can test modules individually), but system integration at large scale will still be complex. Additionally, the error correction overhead for photonic schemes can be high: even with ORCA’s improvements, one might need dozens of physical photons for each logical qubit to guard against loss and gate failures. Managing that overhead without the system becoming unmanageable is a critical challenge. PsiQuantum, for example, estimates needing 1-2 billion physical gate events per second to run a useful algorithm on a million-qubit photonic machine. ORCA will face similar orders of magnitude. Reaching those numbers will require breakthroughs in source brightness, detector rates, and memory bandwidth.

Improving Component Efficiency and Reliability: ORCA’s success hinges on the performance of key photonic components, many of which are still advancing. Single-photon sources need to be bright, on-demand, and identical. The partnership with Sparrow Quantum suggests ORCA is integrating cutting-edge quantum dot sources that can deterministically spit out indistinguishable photons. However, quantum dot sources currently can have issues like multiphoton emission or dephasing; perfecting them for large systems is ongoing work (Quandela and Aegiq are tackling this too). Single-photon detectors need near-unity efficiency and low noise. Transitioning to superconducting detectors on chip (from Pixel Photonics) could provide >90% efficiency, but then one needs cryogenic cooling (unless novel on-chip amplifiers or upconversion are used). Each memory element (rubidium ensemble) must reliably store and retrieve photons with high fidelity and low loss – scaling the number of memory elements and controlling them individually (or in groups) will be challenging. Optical switches are another crucial component: ORCA’s plan to use fast optical switches (like electro-optic or MEMS switches) with low loss is ambitious. Traditional optical switches (like in telecom) often introduce 1-2 dB of loss and have switching times in the nanoseconds to microseconds range. To multiplex many modes, ORCA will need possibly large switch networks (think of optical cross-connects) – keeping losses low across many switch elements is non-trivial, and avoiding cross-talk or undesirable phase shifts is important for quantum coherence. The research team in Austin, TX that ORCA references is likely developing a novel photonic switching platform; delivering on that technology will be a critical enabler. In short, every percent of improvement in source efficiency, memory efficiency, or detector efficiency translates to a much healthier margin for the overall system’s error budget. ORCA will have to continuously incorporate these component-level improvements. This means a lot of R&D and likely collaborations with material scientists and nanofabrication experts – an area where the company must keep pace with global developments.

Photon Loss and Error Rates: Although mentioned above, photon loss is such a central challenge it bears repeating in concrete terms. For a photonic quantum computer to be fault-tolerant, the effective loss per operation must be below a certain threshold (perhaps on the order of 1% or less, depending on code). Currently, a single fiber coupler can have a few percent loss; a memory write/read might have 10-20% loss each way; a detector might miss 5-10% of photons. These losses multiply. ORCA’s recent protocol developments aim to handle some loss by redundancy, but fundamentally lowering the physical loss is also required. It’s a challenge of manufacturing and optics: better anti-reflection coatings, better coupling (e.g. edge-coupling fibers to chips, or coupling light into memory with high optical depth in the atomic ensemble), perhaps using new technologies like photonic crystal fibers (hollow-core fibers that guide light with low loss while overlapping it strongly with the atomic vapor inside). ORCA’s experiment with hollow-core fiber for Rb is one example of tackling the coupling loss problem by design. Nonetheless, as systems grow, ORCA must keep total loss under control – a 1% loss per component is fine for 10 components, but catastrophic for 1000 components in series. Achieving consistently low loss across all optical paths in a large system is a formidable engineering challenge requiring precision alignment and possibly active monitoring and feedback (to stabilize fiber connections or switch settings). Environmental factors like temperature drift can cause slight misalignments or phase shifts in fiber – ORCA might need to implement active stabilization for long fiber loops or interferometers as the complexity grows.
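The “losses multiply” point is worth quantifying, because it drives the entire engineering budget. A short sketch with round, illustrative numbers (not ORCA’s measured figures):

```python
# Why per-component loss dominates at scale: transmission through a chain of
# components is the product of their individual efficiencies. Round numbers only.
def end_to_end_transmission(efficiency_per_component: float, n_components: int) -> float:
    """Photon survival probability through a chain of identical lossy components."""
    return efficiency_per_component ** n_components

for eta in (0.99, 0.999):                           # 1% vs 0.1% loss per component
    for n in (10, 100, 1000):
        t = end_to_end_transmission(eta, n)
        print(f"loss/component = {1 - eta:.1%}, components = {n:4d} "
              f"-> survival probability = {t:.3g}")

# Illustrative budget for one short path: source (90%), memory write and read
# (85% each), two switches (98% each), detector (90%) - round numbers only.
path = 0.90 * 0.85 * 0.85 * 0.98 * 0.98 * 0.90
print(f"illustrative end-to-end efficiency for one photon path: {path:.2f}")
```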

System Complexity and Control Software: ORCA’s current machines are already complex quantum-optical experiments packaged as products. As complexity grows, the classical control system (electronics and software that orchestrate lasers, modulators, detectors, memory gating, etc.) must scale too. Orchestrating millions of time-multiplexed operations per second, with conditional logic (to only release stored photons when others are ready), is a non-trivial real-time computing challenge. Essentially, ORCA has to build a real-time classical control computer alongside the quantum one. This involves FPGAs or fast digital/analog converters, precise timing systems (probably on the order of picosecond synchronization to align photon wavepackets), and software that can compile high-level quantum programs down to these timed operations. As algorithms become more complicated (especially error correction, which has feed-forward based on measurement outcomes), the control demands will increase. ORCA will need to ensure that their control electronics can keep up with the quantum hardware’s speed and parallelism. This might require innovations in FPGA design or even custom ASICs for specific tasks (like quickly processing detector signals and deciding which memory to release). Moreover, if multiple modules are networked, distributed control and calibration across modules becomes a challenge – phasing the lasers in two separate racks to interfere photons might require active phase stabilization across fiber links, for example. These kinds of system-level coordination issues will grow as ORCA scales out its networked approach.

Competition and Differentiation: On the business side, ORCA faces the challenge of differentiating its approach in a competitive quantum landscape. Heavyweight competitors like PsiQuantum and Xanadu, as well as emerging ones like Quandela, are all pursuing photonic quantum computing but with different technical philosophies. PsiQuantum aims for photonic chips in a cryostat and has massive funding; Xanadu pursues continuous-variable photonics with a unique squeezing-based architecture (and also has demonstrated a small error-corrected logical qubit recently). ORCA, comparatively smaller in funding, must continue to punch above its weight by leveraging its early mover advantage (deploying systems) and its partnerships. The challenge will be to not get leapfrogged if, for instance, a competitor suddenly demonstrates a 1000-qubit photonic chip with good fidelity. ORCA’s bet is on practicality and near-term usability – it will need to keep proving that usefulness (quantum advantage in tasks) to stay relevant. If another platform shows a clear general quantum advantage, ORCA will have to answer with its own or risk being seen as a niche player. The company’s strategy to focus on hybrid AI use cases is smart, but the landscape can shift quickly. Thus, continuing to show unique capabilities (like the memory-enabled demonstrations that others can’t easily do) will be important. Additionally, ORCA will likely need further funding to execute its roadmap through PT‑3 and toward fault tolerance. Securing that investment in a climate where multiple quantum hardware approaches vie for attention is a challenge. They may need to demonstrate a clear milestone (perhaps a specific quantum advantage or an error-corrected operation) to convince investors or governments to back the next phase.

Talent and Interdisciplinary Integration: Building a photonic quantum computer is arguably an interdisciplinary nightmare (in a good way) – it combines quantum optics, atomic physics, optical engineering, computer engineering, and software. ORCA will need to continue attracting and retaining top talent in each of these areas. The acquisition of the GXC photonics team in Texas indicates they are addressing the integrated photonics talent need. They also have strong academic ties through Walmsley and others to pipeline new PhDs in quantum optics. Still, as they grow, integrating teams across continents (London, Toronto, Austin offices) and across specialties can be tough. Ensuring that the atomic memory team is in sync with the photonic circuit team and the software team is a managerial challenge. Many startups stumble in scaling their organization as much as their technology. ORCA’s leadership will have to navigate this as the projects become more complex and multi-faceted.

Time and Physics Unknowns: Finally, there is the overarching challenge of the unknown unknowns. Quantum computing at scale is breaking new ground in physics and engineering. There may be unforeseen physical effects when one has, say, thousands of photons and dozens of memories interacting – e.g., subtle nonlinearities in fiber, or cross-talk in closely packed optical switches, or noise from one module affecting another. These effects might not appear until the system is larger than any ORCA has built yet. The company will have to remain agile in identifying and solving such problems. It’s a marathon, not a sprint: ORCA must maintain a strong R&D pipeline (as it has with its academic collaborations) to address fundamental issues that come up, even while pushing a product out.

In conclusion, ORCA Computing faces a set of daunting but surmountable challenges. The technical challenges of scaling photonic quantum tech – improving components, reducing loss, coordinating massive systems – are being actively worked on by ORCA and the broader research community. ORCA’s own publications indicate they are keenly aware of these issues and are trying to “prepare for the unknown” by keeping their techniques flexible. The next few years will likely test ORCA’s solutions to these challenges: we can expect to see, for instance, whether their approach to integrating memory and multiplexing truly does allow them to build larger systems with a smaller component count than competitors. We will also see if they can achieve a concrete quantum advantage in a real-world application (a milestone they target around 2026) to validate their strategy. Overcoming these challenges will require continued innovation and likely close partnership with research institutions and industrial photonics experts. If ORCA succeeds, it won’t just be a win for one company – it will be a vindication for the concept that photons, guided by fiber and enhanced by memories, can lead us to practical, scalable quantum computing.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.