Quantum Computing Companies

Quantum Brilliance

(This profile is one entry in my 2025 series on quantum hardware roadmaps and CRQC risk. For the cross‑vendor overview, filters, and links to all companies, see Quantum Hardware Companies and Roadmaps Comparison 2025.)

Introduction

Quantum Brilliance (QB) is an Australian-German quantum computing company (founded in 2019 as a spin-out of the Australian National University) developing diamond-based quantum accelerators that operate at room temperature. Their hardware uses nitrogen-vacancy (NV) centers in synthetic diamond as qubits – defects in a diamond lattice where a nitrogen atom sits adjacent to a missing carbon. NV-center qubits are attractive because diamond’s rigid lattice protects the qubits from environmental noise, giving them among the longest coherence times of any room-temperature qubit platform. Unlike most quantum computers that require ultra-low temperatures or high vacuum, QB’s devices need no cryogenics or complex laser cooling, allowing them to be smaller, energy-efficient, and deployable in ordinary environments. Strategically, Quantum Brilliance envisions quantum accelerators that can be co-located with classical computing hardware (in data centers, HPC facilities, vehicles, satellites, etc.), integrating seamlessly as “quantum co-processors” in heterogeneous computing systems.

Milestones & Roadmap

2019-2021: Quantum Brilliance was founded in 2019 with initial research focused on NV-diamond quantum tech at ANU. By early 2021, the startup unveiled a “market-ready” prototype: a room-temperature quantum computer the size of a 19-inch server rack module. In April 2021, QB announced plans to install the world’s first diamond quantum accelerator at an HPC center (Pawsey Supercomputing Centre in Australia) as a field trial for hybrid quantum-classical computing. This vision included shrinking the device further (to a graphics-card form factor) by mid-decade, aiming for ~50 qubits in a compact module suitable for satellites or autonomous vehicles.

2022: In June 2022, QB achieved a world-first deployment: a room-temperature diamond quantum accelerator was installed on-site at Pawsey Supercomputing Centre in Perth. This Gen1 quantum accelerator was a 19-inch rack unit housing a small NV-center quantum processor (Gen1 used five qubits in a rack-mountable package) paired with control electronics. The Pawsey trial demonstrated basic integration of the quantum module with a classical supercomputer (the HPE Setonix system) to test hybrid workflows. This milestone proved that a diamond-based QPU could run in a normal data-center environment and interact with HPC infrastructure in a “quantum accelerator” mode. It also provided a testbed for developing quantum-classical co-processing software and for collecting operational data on running a quantum device in an HPC facility. The successful Pawsey deployment was highlighted as a key step in Australia’s national quantum roadmap toward practical quantum technology.

2023: Building on that progress, QB secured significant funding (including an $18M round in March 2023) to accelerate hardware development and international expansion. The company established a presence in Germany (European HQ in Stuttgart) and forged R&D partnerships in Europe. Its stated goal was to improve performance and miniaturization of its room-temp quantum computers and to integrate diamond quantum fabrication into the semiconductor supply chain for scalability. By 2023, QB was already supplying early quantum accelerator systems to select customers “on-site,” and providing a software toolkit (the Qristal SDK and emulator) to let developers prepare for its hardware. Notably, in mid-2023 Quantum Brilliance partnered with the UK’s STFC Hartree Centre to explore scalability and integration of its accelerators for HPC use cases, and joined the QuantumBW initiative in Germany’s Baden-Württemberg region – reflecting government and academic support for its technology.

2024: This year saw second-generation hardware deliveries and strategic contracts. In late 2024, Quantum Brilliance delivered its Quantum Development Kit 2.0 (QB-QDK 2.0) – a 19″ rack-mount quantum accelerator node – to major research institutions. For example, Fraunhofer IAF (Germany) purchased a QB-QDK 2.0, marking the first deployment of QB’s hardware in Europe. The QB-QDK 2.0 features an NV-diamond quantum processor integrated with classical co-processors (CPUs/GPUs) in a single box, enabling experiments with hybrid algorithms (quantum + AI/ML) on a tightly-coupled system. Fraunhofer IAF – a leading diamond research institute – had already been collaborating with QB on diamond quantum technologies (e.g. the “DE-Brill” project), and this deployment advanced their joint research into scalable, energy-efficient quantum computing solutions. Around the same time, the U.S. Oak Ridge National Laboratory (ORNL) acquired multiple QB quantum accelerators (ORNL was in fact the first worldwide to procure the QDK 2.0 earlier in 2024). ORNL planned to install three QB systems in parallel to investigate quantum parallelism for simulations (running simultaneous quantum workloads for molecular modeling, etc.). These deployments in top-tier labs validated QB’s hardware on an international stage.

Crucially, in September 2024 Quantum Brilliance (with partner ParityQC) won a high-profile €35 million contract from Germany’s federal cyber security agency (Cyberagentur) to develop the world’s first “mobile quantum computer” by 2027. This project, aimed at portable quantum technology for defense and security, underscored the advantage of QB’s room-temperature approach for field use. Under the contract, QB and Austria-based ParityQC are collaborating to deliver a transportable quantum prototype that can operate outside labs. QB was selected for its expertise in extreme miniaturization of NV-center hardware, and ParityQC for its quantum architecture and operating system designed for highly scalable NV-center processors. The partnership’s mandate is to demonstrate a compact quantum computer that can function in real-world conditions (for example, in mobile command centers or remote installations) by 2027, keeping Germany at the cutting edge of quantum tech. This timeline implies intermediate milestones: as QB’s CTO indicated, the company will move into full chip production and deliver “drops” of increasing qubit counts on chip in the 2025-2026 timeframe as stepping stones to the 2027 prototype. Also in late 2024, QB announced a strategic collaboration with Oak Ridge to co-develop applications for its accelerators, and it opened a subsidiary in Japan (with Tokyo Metropolitan Government support) to tap into the Asian market.

2025 and Beyond: Entering 2025, Quantum Brilliance closed a $20M Series A funding round to further its R&D and go-to-market efforts. The company is increasingly focusing on the semiconductor integration aspect of its roadmap: it began working with IMEC in Belgium (a leading semiconductor R&D institute) to explore how diamond qubit fabrication can be made compatible with standard CMOS processes. The long-term goal is to embed diamond-based qubits into conventional chip fabrication workflows, potentially enabling mass production of quantum processors that can sit alongside classical processors on the same board. Through a new EU-funded €18M project on diamond chip development (and the German Cyberagentur project), QB is designing next-generation devices with on the order of 25-100 qubits in a compact form factor. These upcoming systems (targeted for the late 2020s) will still be non-fault-tolerant NISQ machines, but at qubit counts where meaningful applications in areas like AI inference or molecular simulation become feasible.

Quantum Brilliance explicitly projects that by 2029 it will release its first commercially valuable quantum accelerator with on the order of 60-100 qubits. They refer to this performance threshold as achieving “quantum utility” – where a small, room-temp quantum co-processor can deliver some economic advantage on specialized tasks. Unlike competitors chasing thousands of qubits, QB’s stated aim for 2029 is a “tens of qubits” accelerator in a lunchbox-sized package that is low-cost and low-power enough to deploy in large numbers or in edge environments.

Beyond 2030, QB’s roadmap hints at further miniaturization (down to chip-scale devices that might eventually fit in mobile devices) and scaling via modular networks of these quantum accelerators, although specifics are not yet public. The overarching long-term goal is mass deployment of quantum accelerators – “from mainframe to mainstream” – enabling quantum computing to permeate everyday technology rather than residing only in specialized labs.

Focus on Fault Tolerance

Quantum Brilliance’s near-term strategy de-prioritizes full fault-tolerance in favor of first achieving “quantum utility” with small NISQ devices. Because current hardware has very limited qubit counts (single-digit qubits in prototypes, aiming for tens of qubits by 2027-2029), QB is not yet pursuing large-scale error-correcting codes or fully fault-tolerant architectures. “Obviously with very low qubit numbers, we are not aiming at [a] fully error-corrected architecture,” explained QB’s Head of Algorithms in 2025.

In the short-to-medium term, the company instead focuses on noisy intermediate-scale quantum (NISQ) processors and finding ways to extract value from them. One approach QB emphasizes is leveraging the unique deployability of its devices: because they are cheap, compact, and room-temperature, you could use many of them in parallel to compensate for limited qubit counts. For certain applications, deploying 10, 20, or 30 small quantum accelerators in parallel (e.g. in an HPC rack or distributed at edge nodes) could provide a speedup or throughput advantage, even if each individual device is not error-corrected. This parallelism strategy is a notable part of QB’s vision – other quantum technologies would find it impractical to run dozens of separate quantum processors simultaneously due to cost and power, but QB argues its diamond accelerators are so low-power and portable that clustering them is feasible. By running many noisy quantum units concurrently (or embedding them throughout a supercomputing facility), one can mitigate some limitations of each unit and explore aggregate quantum computing power without a monolithic device.
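A minimal way to see the throughput argument: if a workload decomposes into independent quantum jobs, N devices finish it roughly N times faster. The sketch below models this with purely illustrative numbers (job counts and runtimes are assumptions, not QB benchmarks).

```python
import math

# Toy throughput model for the parallelism argument: M independent
# quantum jobs spread across N small accelerators, each running one
# job at a time. All numbers are illustrative, not QB specifications.

def wall_clock_time(num_jobs: int, num_devices: int, secs_per_job: float) -> float:
    """Time to finish all jobs when devices work through them in rounds."""
    rounds = math.ceil(num_jobs / num_devices)
    return rounds * secs_per_job

# 120 independent sampling jobs, 10 s each:
single = wall_clock_time(120, 1, 10.0)    # one accelerator
cluster = wall_clock_time(120, 30, 10.0)  # thirty accelerators in a rack

print(single, cluster, single / cluster)  # 1200.0 40.0 30.0
```

The speedup is linear only because the jobs are independent; as the text notes, algorithms that need all qubits in one coherent register do not decompose this way.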

That said, QB acknowledges that fault tolerance is a necessary goal in the long run. The company has stated it is “building a roadmap for fault-tolerant quantum computing” – but this is a long-term, post-NISQ ambition. In practical terms, QB expects to introduce error mitigation and error correction techniques gradually once qubit counts allow. Their mid-to-late 2020s devices (tens of qubits) will likely remain uncorrected, but QB’s scientists anticipate exploring modest error-correction on “small computers” when possible. The partnership with ParityQC is one indicator of QB’s fault-tolerance ambitions: ParityQC specializes in quantum architectures that are scalable and in developing an operating system that can support error-corrected quantum computing on NV-center hardware. As part of the German mobile quantum computer project, ParityQC is co-designing the system architecture to ensure that as NV-center processors scale up, they can incorporate error-reduction techniques and potentially error-correcting logical qubits in the future.

In summary, QB’s current stance is that fully fault-tolerant quantum computing is beyond the scope of its first few product generations. The Gen1 and Gen2 diamond accelerators are inherently NISQ. The company’s focus is on maximizing what can be done with NISQ devices – e.g. hybrid algorithms, analog modes, or deploying multiple units – to reach a practical advantage without error correction. However, QB is laying the groundwork for fault tolerance in parallel. Their R&D in qubit fabrication and control is aimed at improving stability and gate fidelities (reducing error rates at the physical level). They have hinted at using the nuclear spin ancilla of NV centers for memory or error correction in future designs (an NV center contains a nuclear spin that could serve as a long-lived qubit or error-correcting resource). And as qubit counts increase into the hundreds (beyond 2030), QB would intend to implement quantum error-correcting codes on diamond processors. The end vision is a line of compact quantum accelerators that eventually become fault-tolerant through a combination of built-in error correction and networked scalability.

CRQC Implications

A cryptographically relevant quantum computer (CRQC) is a quantum machine powerful enough to break modern cryptographic algorithms (for example, factoring 2048-bit RSA). Reaching CRQC scale typically implies a universal, fault-tolerant quantum computer with thousands of logical qubits (likely millions of physical qubits given error-correction overhead). Quantum Brilliance’s approach, which prioritizes compactness and deployability over raw qubit count, is not primarily aimed at near-term CRQC. In fact, QB has often contrasted its strategy with that of the “quantum mainframe” builders: “We are not aiming at a million qubits on that chip,” said QB’s COO, “we are aiming at tens to 100 qubits [for our first products].” The company’s leadership projects that useful quantum accelerators will appear by 2029 with ~60-100 qubits, delivering specific commercial value, whereas a “general-purpose” large-scale quantum computer (of the kind needed for breaking encryption) “will have to wait a bit longer.” In other words, QB is not directly chasing a cryptography-breaking machine in this decade – their focus is on a different segment of the quantum landscape (edge and HPC acceleration).
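For scale, a back-of-envelope comparison using commonly cited ballpark figures from the error-correction literature. Both inputs are rough assumptions for illustration, not QB projections or precise resource estimates.

```python
# Back-of-envelope CRQC scale vs. QB's 2029 target.
logical_qubits_for_rsa2048 = 4000  # order-of-magnitude literature figure (assumed)
physical_per_logical = 1000        # surface-code overhead ballpark (assumed)

physical_needed = logical_qubits_for_rsa2048 * physical_per_logical
qb_2029_target = 100               # top of QB's stated 60-100 qubit range

print(physical_needed)                    # 4000000 physical qubits
print(physical_needed // qb_2029_target)  # 40000x a 2029 accelerator
```

Even with generous rounding, the gap between a 2029-era accelerator and CRQC scale is four to five orders of magnitude, which is the point QB's leadership makes when it says general-purpose machines "will have to wait a bit longer."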

In summary, Quantum Brilliance’s current hardware is far from a CRQC, and the company is candid that achieving cryptography-threatening scale is “too early to talk about” at this stage. Their plan of record is to reach “quantum utility” with NISQ accelerators and then gradually scale toward fault-tolerant machines in later generations. Should those later generations succeed in scaling up orders of magnitude (via modular cluster architectures or chip miniaturization breakthroughs), QB’s diamond-based qubits could form one pathway to a CRQC. But relative to other approaches, the NV-diamond modality will need significant breakthroughs to approach CRQC regimes, and QB seems to be strategically positioning itself as a complement to the large-scale quantum computers being developed elsewhere. Therefore, QB’s trajectory suggests it aims to fill niches that do not require CRQC-level scale in the foreseeable future, while leaving the direct CRQC race to players focused on maximum qubit count.

Modality & Strengths/Trade-offs

Quantum Brilliance’s hardware is built on the electron spin of NV centers in diamond. Each NV center acts as a qubit that can be initialized, manipulated (by microwave pulses, magnetic fields, and optical interactions), and read out (traditionally via photoluminescence). This modality has a unique set of strengths and trade-offs compared to other qubit technologies:

Strengths:

Room-Temperature Operation: The standout advantage of NV-diamond qubits is that they operate stably at ambient conditions. The NV electron spin has quantum coherence times in the millisecond range even at room temperature, orders of magnitude longer than most room-temp alternatives. This means no dilution refrigerators or vacuum chambers are needed, drastically reducing system overhead. QB’s accelerators essentially require only a standard air-conditioned environment and a modest power supply for the control electronics, much like a classical server. This leads to small form factors and low power consumption: for example, the Gen1 device is a self-contained 3U rack unit and plugs into a wall outlet. Such a system can be installed on a desktop, in existing data center racks, or even on mobile platforms, which is impossible for cryogenic quantum computers. The room-temp operation also simplifies maintenance and potentially improves reliability (fewer mechanical components like cryocoolers). This strength enables QB’s vision of ubiquitous quantum computing – embedding quantum accelerators “anywhere a classical computer can go”, from factory floors to satellites, without special infrastructure.

Miniaturization & Portability: Because no bulky cooling or laser systems are needed, QB can pursue extreme miniaturization of the whole quantum processor. They have demonstrated that the quantum processor itself can be very small: in the latest lab prototypes, a 1×1×0.5 mm diamond holds multiple NV qubits. The remaining volume in the current “lunchbox”-sized accelerator is mostly taken up by control electronics (microwave drivers, FPGA control, etc.). As classical electronics shrink and integrate (potentially even into ASICs), the overall package can shrink further. QB’s long-term aim is to integrate the diamond NV chip with CMOS control circuitry in a single package, possibly even achieving chip-scale quantum accelerators. They’ve suggested that future models could be as small as a GPU card or smaller. The robust, solid-state nature of diamond qubits also contributes to portability – the devices are not vibration-sensitive or delicate in the way ion traps or optical setups are. This makes mobile and edge deployments viable (e.g. quantum accelerators in vehicles, drones, or field deployable units). In essence, QB’s modality trades off raw qubit count for ruggedness and flexibility in form factor.

Hybrid Integration & On-site Deployment: QB’s accelerators are designed to sit alongside classical computing resources. In fact, the QDK 2.0 nodes integrate NVIDIA GPUs and CPUs in the same box as the QPU, enabling fast data exchange between quantum and classical parts. This co-location is ideal for hybrid algorithms (e.g. quantum machine learning where a quantum circuit might act as one layer in a classical neural network). There’s minimal latency penalty compared to cloud-based quantum access, since the quantum engine is literally in the same rack as the CPU/GPU. This strength aligns with emerging frameworks like NVIDIA’s CUDA-Q (formerly CUDA Quantum), which QB uses to allow seamless programming of quantum+classical workflows. Moreover, because the hardware can be installed on-premises, data sovereignty and security concerns are eased for sensitive applications – users (like defense agencies or national labs) can keep quantum computations local rather than sending data to a remote cloud quantum computer. QB’s modality thus supports scalable deployments in HPC centers and enterprises, treating quantum accelerators as another server or accelerator card in the cluster. This is a different philosophy from the cloud-centric superconducting QCs, and it could play well in industries that require on-site quantum capability (for low latency or security reasons).

Long Coherence with Potential for Memory Qubits: The NV center’s electron spin coherence times (T₂) at room temperature can reach milliseconds (and with dynamical decoupling, in ultra-pure diamond, even ~0.1 s has been reported in research). Additionally, each NV has a nearby nuclear spin (of the N atom or surrounding C-13) that can serve as a long-lived quantum memory with coherence times up to seconds at room temp and much longer at low temperature. In principle, QB can leverage these nuclear spins for quantum memory or even built-in error correction (e.g. using an NV’s nuclear spin as a backup qubit to correct errors on the electron spin). The stability of the NV qubit also means fewer environmental errors – the diamond lattice shields it from vibrations and thermal fluctuations. This high intrinsic coherence is a boon for potentially achieving high-fidelity operations and is a key trade-off favoring NV centers despite their slower gate speeds (compared to superconducting qubits). It means NV qubits can perform many operations before losing coherence, which is essential for any error correction in the future.

Built-in Sensing and Novel Applications: A unique aspect of diamond NV centers is that they are also superb quantum sensors (sensitive to magnetic fields, electric fields, temperature, etc.). Quantum Brilliance has hinted at combining quantum computing and sensing in the same device. For example, a diamond accelerator on a satellite could both perform quantum computations and do in-situ magnetometry or inertial sensing. The integration of sensing might open up dual-use applications that other quantum computers cannot do (since other qubits are too isolated or require vacuum). This could be a strategic differentiator: QB’s room-temperature device could potentially directly process sensed analog data with quantum algorithms at the edge, useful for defense, navigation or scientific instruments. Such integration is speculative but supported by the platform’s versatility.

Against these advantages, several trade-offs and challenges are inherent to the NV-diamond modality:

Limited Qubit Count & Connectivity: The foremost challenge is scaling up the number of qubits. NV-center qubits are material-defect-based qubits; creating many of them in one chip with precise positioning is non-trivial. Unlike superconducting qubits which can be lithographically repeated, NV centers in a diamond are usually introduced via techniques like ion implantation or CVD growth with dopants, which historically yield random placements. Quantum Brilliance’s breakthrough lies in deterministically positioning NV centers with nanometer accuracy. They are exploring an STM lithography method to place N atoms and create arrays of NVs with ~5-10 nm spacing. However, this is at research stage; scaling to even tens of qubits with uniform, known spacing is challenging. The NV centers must be close enough (~5-10 nm apart) to enable two-qubit gates via magnetic dipole coupling, but not so close as to perturb each other uncontrollably. Achieving a large, regular lattice of NVs with ±1 nm precision in spacing is a “tall order” that current top-down implant methods cannot meet. QB’s proposed bottom-up fabrication (using STM and CVD) has shown early promise, but it needs to be scaled from placing a few NVs to placing potentially thousands. Until this is solved, QB’s qubit counts will lag behind some other platforms. Indeed, Gen1 had only 5 qubits; near-term devices on the roadmap (Gen2/Gen3) are targeting 10s of qubits.
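To make the spacing requirement concrete, here is a rough estimate of the magnetic dipole-dipole coupling (and the two-qubit gate time it implies) as a function of NV separation. It treats each electron spin as a point dipole of one Bohr magneton and ignores the orientation dependence of the dipolar interaction, so the numbers are order-of-magnitude only.

```python
# Order-of-magnitude NV-NV dipolar coupling vs. separation.
MU0_OVER_4PI = 1e-7   # magnetic constant / 4*pi, T^2 m^3 / J
MU_B = 9.274e-24      # Bohr magneton, J/T
H = 6.626e-34         # Planck constant, J s

def coupling_hz(r_m: float) -> float:
    """Point-dipole coupling frequency at separation r (orientation ignored)."""
    return MU0_OVER_4PI * MU_B**2 / (r_m**3 * H)

for nm in (5, 10, 20):
    f = coupling_hz(nm * 1e-9)
    print(f"{nm} nm: ~{f/1e3:.0f} kHz coupling, ~{1/(2*f)*1e6:.0f} us gate")
```

The 1/r³ scaling is why the spacing tolerance is so tight: doubling the separation from 5 nm to 10 nm cuts the coupling by 8x, pushing the entangling gate from a few microseconds toward the tens-of-microseconds regime mentioned below.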

Gate Speed and Fidelity: NV-center qubits typically use microwave pulses to drive single-qubit rotations (on the electron spin’s magnetic resonance) – these gates are relatively slow (microseconds duration) compared to superconducting qubit gates (tens of nanoseconds). Two-qubit gates for NVs often rely on the dipolar spin-spin interaction or coupling via a common mode (like a spin bus or optical photon). These two-qubit operations can be even slower (tens to hundreds of microseconds) and can suffer from cross-talk because nearby NVs might all respond to the same microwave fields unless carefully addressed. While NV qubits have long coherence, the ratio of coherence time to gate time is an important figure of merit. If gates take too long, noise can accumulate. Achieving high-fidelity entangling gates between NVs is an ongoing research challenge – it requires both precise placement and advanced pulse sequences (or using the nuclear spin as an intermediary).
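The coherence-to-gate-time ratio can be put in numbers using the ballpark figures above. All values here are illustrative round numbers, not measured specifications of any particular device.

```python
# Rough "operations before decoherence" budgets (illustrative values).
def ops_before_decoherence(t2_s: float, gate_s: float) -> int:
    """How many back-to-back gates fit inside one coherence time."""
    return round(t2_s / gate_s)

nv_single = ops_before_decoherence(1e-3, 1e-6)      # ms-scale T2, us single-qubit gate
nv_two_qubit = ops_before_decoherence(1e-3, 50e-6)  # tens-of-us entangling gate
sc_qubit = ops_before_decoherence(100e-6, 50e-9)    # typical superconducting scales

print(nv_single, nv_two_qubit, sc_qubit)  # 1000 20 2000
```

The comparison shows why the ratio, not raw T₂, is the figure of merit: NV centers hold coherence a thousand times longer than superconducting qubits in this toy example, yet slow entangling gates can still leave a smaller circuit-depth budget.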

Readout and On-Chip Integration: Traditional NV readout is optical: one shines a laser (usually green, ~532 nm) and detects red fluorescence, where the brightness depends on the spin state. This normally requires a photodetector (APD or photomultiplier) and often a microscope objective to collect light. Such a setup doesn’t scale to many qubits or to a compact device. QB has developed a “photoelectric readout” technique to integrate readout on-chip. While details are proprietary, it likely involves converting the NV’s state-dependent fluorescence into an electrical signal via a photodiode or other semiconductor structure bonded to the diamond. This is a novel solution, but still under development – as of 2025, QB’s lab devices with 2 qubits were still using a confocal microscope for readout (for testing purposes). The fully integrated photoelectric readout chip is planned for the next-generation prototypes. Ensuring this on-chip readout is fast, low-noise, and scalable to many qubits is a significant challenge. Each qubit might require its own photodiode or pixel sensor and associated amplifier, all of which must be built onto or beside the diamond without disrupting the qubit. Similarly, delivering microwave control signals to each NV at the nanoscale requires intricate nano-fabricated microwave waveguides or antennas on the diamond. QB has indicated it invented scalable nanoelectronic control layers for this purpose. Nonetheless, combining diamond, photonic, and electronic components in one semiconductor-style process is a complex engineering task. The partnership with IMEC suggests QB is leveraging established semiconductor process expertise to tackle this.

Manufacturing and Materials: Diamond is not a standard material in the semiconductor industry. Producing large, ultra-pure synthetic diamond wafers is difficult and expensive. NV centers require extremely pure (Type IIa) diamond with very low nitrogen content, except for the intended NV sites. Companies like Element Six can produce such diamonds, but scaling to larger wafers or volumes for mass production is non-trivial. Additionally, integrating diamond with CMOS tools means handling a material with very different properties (diamond is hard, has high thermal conductivity, and different chemical behavior). There may be challenges in bonding diamond to a substrate, doing lithography on it, etching features, etc. QB and IMEC are assessing whether diamond can be made “fab-friendly” or if alternate approaches (like a hybrid photonics process) are needed. If full CMOS compatibility is not achievable, they might use a specialized line (IMEC’s 200mm photonics fab) for diamond processing. This indicates a risk: the supply chain for diamond quantum chips is immature compared to silicon-based qubits. Ensuring consistency and yield in placing many NVs is as much a materials science challenge as a quantum physics one.

Throughput vs. Performance: QB’s philosophy is to use many small QCs in parallel for throughput on specialized tasks, rather than one very powerful QC. This is a different trade-off compared to companies seeking quantum supremacy on single computational tasks. The advantage is in scalability of deployment – you can imagine 50 quantum accelerators each with 50 qubits distributed in a supercomputer, versus trying to build one monolithic 2500-qubit machine. However, not all algorithms can trivially take advantage of parallel quantum processors. Many quantum algorithms (like Shor’s factoring or Grover’s search) are not easily parallelizable – they work on a single device with many qubits. QB’s approach will favor tasks that can be broken into independent quantum jobs or that require repeated quantum sampling that can be done concurrently. For example, running many instances of a variational quantum algorithm (VQE) with different parameters simultaneously on separate QPUs could speed up reaching an optimized result, or one could search a solution space with multiple small annealing-style accelerators in parallel. The trade-off is that QB’s accelerators might not tackle individual hard instances as effectively as a larger universal quantum computer could, but they might excel at massively parallel quantum processing for the right applications. It’s a somewhat new paradigm, and one challenge will be developing software and algorithms that leverage distributed quantum computing on QB’s network of accelerators.
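As a concrete sketch of the parameter-sweep idea, the snippet below farms out evaluations of a variational cost function to a pool of workers, each standing in for one small accelerator. The cost function is a classical toy (a simple quadratic), not a real quantum expectation value, and the worker count is an arbitrary illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def energy(theta: float) -> float:
    # Stand-in for <psi(theta)|H|psi(theta)> measured on one accelerator;
    # a real VQE would run a parameterized circuit here.
    return (theta - 0.7) ** 2 - 1.0

thetas = [i / 100 for i in range(150)]  # coarse parameter grid to sweep

# Eight workers stand in for eight accelerators evaluating points concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    energies = list(pool.map(energy, thetas))

best_theta = thetas[min(range(len(thetas)), key=energies.__getitem__)]
print(best_theta)  # 0.7
```

Because each evaluation is independent, the sweep parallelizes trivially; the same pattern would not help an algorithm like Shor's, where one large entangled register is required.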

In summary, Quantum Brilliance’s diamond NV modality offers a compelling route to quantum computing that emphasizes practicality and integration (no cryogenics, small size, HPC-friendly), but it must overcome serious technical challenges in scaling and control to reach high qubit counts.

Track Record

Despite being a young company (founded in late 2019), Quantum Brilliance has built an impressive track record of technical milestones, partnerships, and credibility signals in the quantum hardware arena:

  • Delivered the World’s First Room-Temp Quantum Computer in a Supercomputing Center (2022): QB’s flagship achievement to date is the successful integration of its quantum accelerator at Pawsey Supercomputing Centre in 2022 – making it the first on-premises, room-temperature quantum computer installed in any supercomputing facility worldwide.
  • Commercialization of Gen1 Quantum Accelerators (2021-2023): QB moved quickly from prototyping to offering a commercial product (Gen1) to early adopters. By 2021 they had a “market-ready” 5-qubit system and by 2023 the Gen1 model was officially commercially available for purchase/rent. They established a sales pipeline wherein research labs and supercomputing centers could acquire a QB quantum node. The uptake by ORNL and Fraunhofer in 2024 of the improved QDK 2.0 shows that leading institutions trust QB’s tech enough to invest in it.
  • Global Research Partnerships: QB has formed an extensive network of collaborations across academia, government, and industry. In Australia, it emerged from ANU research and has worked with CSIRO (Australia’s national science agency) and Pawsey. In Germany, QB is part of the Quantum BW consortium (Baden-Württemberg’s quantum initiative) and collaborates with the University of Stuttgart and Fraunhofer IAF – leveraging Germany’s strong expertise in solid-state quantum devices. The partnership with ParityQC (Austria) brings in advanced architecture know-how for scaling NV systems. In the UK, QB partnered with the Hartree Centre (STFC) to explore scalability and industrial use-cases for its tech. In the US, it strategically allied with Oak Ridge National Lab, which both validates the tech and helps develop software for QB’s accelerators. Moreover, QB has a memorandum with NVIDIA: since 2022, NVIDIA and QB have worked closely (NVIDIA’s Tim Costa highlighted QB’s work co-locating quantum with GPUs as “leading the charge to useful quantum computing”). Collectively, these partnerships have helped QB punch above its weight. It also suggests QB’s technology has passed independent scrutiny by experts (since these partnerships often involve technical due diligence).
  • Government Support and Contracts: Quantum Brilliance has attracted notable government attention, which adds to its credibility. The German Cyberagency contract in 2024 (competitive €35M project) was a strong endorsement – QB/ParityQC were one of only three consortia selected, winning due to their “unique expertise” in miniaturized, NV-center quantum tech. The Tokyo Metropolitan Government’s subsidy in 2025 to establish QB’s Japan office is another vote of confidence, making QB the first Australian quantum company to be so recognized in Japan. In Australia, QB received funding from the government-backed Breakthrough Victoria fund and the national Main Sequence VC (affiliated with CSIRO), indicating alignment with national innovation agendas. QB also won the Manufacturing Innovation Award in Australia in 2022 for its work in quantum manufacturing. These signals show that QB’s vision of diamond quantum accelerators is taken seriously by public sector stakeholders and is being invested in as part of broader quantum technology strategies.
  • Technical Publications & IP: On the technical front, QB staff and collaborators have been publishing research that underpins their IP. For instance, in 2022-2023 QB researchers co-authored papers on the deterministic fabrication of NV centers using STM lithography, and on diamond nano-fabrication techniques, in journals like Materials for Quantum Technology. They have filed patents on key innovations (e.g., methods for NV placement, photo-electric readout, integrated diamond devices). This growing IP portfolio protects their know-how and also is peer validation of their science (the fact that ANU/QB’s method for atomically precise NV placement was published and highlighted in the community shows it’s a real advance towards scaling).
  • Fundraising and Growth: QB has raised over USD $40M (combined) by early 2025, including an $18M seed/bridge round in 2023 and a $20M Series A in 2025. It also received earlier seed funding and grants (the Australian Financial Review reported the startup “banked $26M” AUD by Feb 2023). This war chest, while modest compared to some U.S. quantum giants, is significant for an Australian deep-tech startup. The ability to secure funding from both Australian and international investors (e.g. Germany’s Investible, and others listed) reflects confidence in QB’s roadmap. They have also expanded their global footprint to five countries (Australia, Germany, U.S. presence via the ORNL partnership, UK, Japan, plus engagements in Belgium with IMEC), growing from a small Canberra team to a ~50+ person multinational team of physicists and engineers. This growth and backing bolster their credibility as one of the leading firms in non-superconducting quantum hardware.

Overall, Quantum Brilliance’s track record shows a pattern of delivering on incremental milestones and securing strategic alliances. They have installed working hardware in real-world settings (Pawsey, Fraunhofer, ORNL), likely making them the only quantum startup offering a room-temperature quantum computer “in the field” today. Their approach has attracted high-caliber collaborators and funding, suggesting that the community views diamond-based quantum accelerators as a plausible and important part of the quantum ecosystem.

Challenges

Despite its progress, Quantum Brilliance faces several significant challenges – technical, strategic, and industrial – on the road to achieving its long-term vision:

  • Scaling Qubit Count: The foremost technical challenge is scaling from the current single-digit qubit prototypes to devices with tens and eventually hundreds of qubits. As discussed, placing NV centers with the required precision is difficult. The recent ANU/QB research suggests that better than ±1 nm placement accuracy and uniform alignment are needed for NV spacings of ~5-10 nm – a level of control not yet attained in practice. Early results in controlled placement are promising but show only ~34% success rates for incorporating nitrogen where intended. To scale up, QB must greatly improve the yield of NV-center creation at specific sites. This likely involves iterating on STM lithography processes, refining CVD growth conditions, and maybe novel doping techniques. It’s an active research front – any delays or roadblocks in this fabrication method could slow QB’s roadmap. For instance, if after two years of work the process can only reliably place, say, 10 NVs, then the goal of a 50-100 qubit chip by 2029 might slip. Achieving a repeatable, high-throughput atomic-scale fabrication process for diamond remains one of the grand challenges in solid-state quantum tech.
  • Improving Multi-Qubit Gate Fidelity: Alongside qubit count, quantum gate performance must scale. QB will need to demonstrate high-fidelity operations as qubits increase. Currently, two-qubit gate fidelities in NV systems are likely well below those seen in superconducting qubits or ions (which exceed 99%). If QB’s 5-qubit system, for example, had gate fidelities in the 90-95% range, that would severely limit algorithm depth. To be useful, the next-gen devices (say 20-50 qubits) will need error rates per gate in the 1% or sub-1% range. This means investing in advanced control techniques: error-resistant pulse sequences, calibration algorithms, perhaps integration of error mitigation software. NV centers have some unique noise sources (e.g. nearby spins in the lattice causing decoherence) that need to be mitigated by careful diamond engineering (isotopically purified diamond to remove C-13 spins, for example, which QB likely uses). The company must also manage cross-talk – with qubits spaced only a few nanometers apart, microwave pulses might inadvertently rotate neighboring qubits unless finely localized. Engineering microwave delivery at the nanoscale (maybe via on-chip waveguides or stripline antennas) is complex. Each added qubit introduces more potential cross-talk paths, so scaling from 5 to 50 qubits could significantly complicate calibration. Therefore, scaling up without sacrificing control fidelity is a major challenge. If not addressed, the devices might end up with many qubits that cannot all be effectively utilized in computation due to errors.
  • Integration of Full Stack (Diamond + CMOS): While QB has outlined a plan to integrate nanoelectronics and photonics on the diamond chip, executing this is a formidable engineering project. The challenge is akin to building a hybrid quantum-classical chip. Any delays in the IMEC collaboration or technical incompatibilities could pose risks. For example, if they find that certain standard fabrication steps damage the diamond or the NV centers (perhaps high-temperature steps anneal out the vacancies, or metals introduce magnetic noise), they may need custom process flows which take time to develop. Even packaging the chip is challenging: one needs to get microwaves in, get optical or electrical signals out from dozens of qubits, and shield the qubits from stray fields – all in a small package. This might require innovative chip packaging techniques (maybe flip-chip bonding of diamond to a CMOS readout chip). Essentially, QB is attempting to pioneer a new class of quantum microprocessor; any number of unforeseen issues could arise (e.g., charge instability at diamond surfaces affecting qubits, or difficulties in fabricating photodiodes that align with NV emission, etc.). The risk is that the path to a fully integrated chip could take longer than expected, forcing QB to rely on semi-bench-top setups longer (which would hamper scaling and deployment). However, the German “mobile quantum computer” project by 2027 likely forces a timeline – they have to produce a self-contained prototype by then, which should drive integration efforts at a brisk pace. Failing to meet that deadline could be reputationally damaging, so QB has strong incentive to solve these integration challenges.
  • Competition & Technological Validation: Strategically, QB operates in a competitive landscape. While they have a unique angle, they must prove that diamond quantum accelerators can deliver useful performance before competing approaches render them obsolete. Companies like IBM, Google, IonQ, etc., are pursuing higher qubit counts and might achieve quantum advantage on some problems in the coming years. If, for example, by 2026 a superconducting processor demonstrates a clear advantage with 1000 qubits (albeit cryogenic), some may question the relevance of a 50-qubit room-temperature device. QB’s counter-argument is that their device can be deployed in ways others can’t (edge, embedded). Nonetheless, they need to find compelling applications for their smaller-scale QPUs to justify the technology. This is as much a software challenge as hardware – QB has a team (the “quantum applications/migration team”) working on algorithms tailored to their hardware’s strengths. They’ve hinted at exploring algorithms that might even be non-gate-based or variational, and targeting domains like machine learning, optimization, or simulation where 60-100 qubits could be useful. It’s a challenge to identify a “killer app” for a small quantum accelerator. If they fail to do so, there’s a risk that by the time their hardware is ready, general-purpose quantum computers might overshadow them. In essence, QB must validate that their approach yields quantum advantage in practical scenarios – perhaps showing, for instance, that a cluster of 10 QB accelerators can do something an HPC GPU cluster cannot. This validation is likely a few years away and is a critical milestone.
  • Market Education and Adoption: As QB’s own leaders have noted, the concept of “edge quantum computing” is very new and not widely discussed yet. Many potential users do not even realize that a quantum computer could be portable or integrated on-site; the narrative in recent years has been around cloud-based access to quantum mainframes. QB has to essentially create a new market segment and convince customers of the value of having a quantum accelerator locally. This involves a lot of education, pilot projects, and proving out use cases. They have started with research customers (who naturally are interested in experimenting), but attracting industry customers (say an automotive company for in-car quantum accelerators, or a data center operator for on-prem quantum) is a challenge. Industries like defense are showing interest (hence the German project, and likely interest from Australian and U.S. defense circles) because of the strategic advantage of deployable quantum tech. Still, QB will need to deliver clear, easy-to-use solutions – including software, documentation, support – to lower the barrier for organizations to try their hardware on real problems. Ensuring the Qristal SDK and emulator provide a smooth on-ramp is part of this challenge, as is demonstrating integration with frameworks like HPC schedulers or AI toolchains. Essentially, the company must transition from an R&D focus to a more product-oriented mindset to drive adoption.
  • Resource Constraints and Execution: Unlike some competitors backed by huge corporations or SPACs, QB operates with a relatively lean budget and team. Delivering cutting-edge hardware on tight timelines (2027 prototype, etc.) will strain resources. They need to attract and retain very specialized talent (quantum engineers, nanofabrication experts, etc.) across continents. The multi-site presence (Australia and Germany primarily) could pose coordination challenges, though it also taps into wider talent pools. As with any deep-tech startup, execution risk is high – hitting technical milestones often takes longer than anticipated. The good news is they have strong partners to lean on (Fraunhofer for materials, IMEC for fab, etc.), but those partnerships must be managed well. Any setback in fabrication (say a key researcher’s process fails, or IMEC’s trial runs don’t work) could delay the roadmap. They also rely on supply of high-quality diamond substrates; any hiccup there could be problematic (though there are a few suppliers globally).
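The scaling constraints above come down to simple exponential arithmetic: per-gate errors compound over circuit depth, and per-site fabrication yield compounds over qubit count. The sketch below illustrates both with purely hypothetical numbers (the 95%/99.9% gate fidelities and the ~34% incorporation yield cited in the research); none are QB specifications.

```python
# Back-of-envelope arithmetic for two scaling constraints discussed above.
# All figures are illustrative assumptions, not Quantum Brilliance specs.

def circuit_fidelity(gate_fidelity: float, num_gates: int) -> float:
    """Expected overall success probability if each gate errs independently."""
    return gate_fidelity ** num_gates

def placement_yield(site_yield: float, num_sites: int) -> float:
    """Probability that every NV site on a chip is successfully created."""
    return site_yield ** num_sites

# A 50-gate circuit at 95% per-gate fidelity retains under 10% overall
# fidelity, while 99.9% per-gate fidelity retains about 95% -- which is
# why sub-1% error rates matter for useful algorithm depth.
print(f"{circuit_fidelity(0.95, 50):.3f}")   # ~0.077
print(f"{circuit_fidelity(0.999, 50):.3f}")  # ~0.951

# At a ~34% per-site incorporation yield, a defect-free 10-qubit array
# succeeds in only ~0.002% of attempts -- hence the push for much higher
# yield (or site-repair / post-selection strategies) before scaling.
print(f"{placement_yield(0.34, 10):.2e}")    # ~2.06e-05
```

The same arithmetic explains why roadmaps to 50-100 qubits hinge on both fabrication yield and gate fidelity improving together, not just qubit count.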

In conclusion, Quantum Brilliance’s challenges revolve around the central issue of scaling: scaling the technology (more qubits, better gates), scaling the manufacturing (inserting diamond into mass production), and scaling the market (driving broader adoption). Each of these has sub-challenges – from atomic-level fabrication to persuading conservative industries to try a new approach. The company’s current trajectory shows awareness of these issues and proactive steps (e.g., research on fabrication, partnerships for manufacturing, and application development teams).

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.