
Quantum Open Architecture (QOA): The “PC Moment” of Quantum Computing


Introduction – From Lab Curiosities to Quantum “PCs”

In a historic Italian university lab that predates Newton, researchers are now running Italy’s largest quantum computer – not a sealed box from Big Tech, but a machine they built themselves by mixing and matching components. Eight centuries after its founding, the University of Naples Federico II is operating a 64-qubit system powered by a Dutch-made processor, assembled with modular parts much like a custom PC. This achievement exemplifies Quantum Open Architecture (QOA) – a transformative approach that is dragging quantum computing out of a closed, “mainframe-era” cocoon and into a vibrant, modular ecosystem.

Early quantum computers were like the mainframes of old: bespoke, vertically integrated marvels crafted end-to-end by a single team or company. In the 2000s and 2010s, if you wanted a quantum computer, you effectively had to build the whole stack yourself – qubits, cryogenics, control electronics, and software – or buy one from a provider who did. Companies like IBM and research labs at MIT or Delft operated this way, much as IBM did in the 1950s when it made every part of its early mainframe computers in-house. This vertical model jump-started the field but came at a cost: it was slow, expensive, and accessible to only a few.

Today, a sea change is underway. Quantum Open Architecture (QOA) is doing for quantum computing what the PC revolution did for classical computing – opening up the ecosystem. Just as the computing world shifted from monolithic mainframes to modular PCs with swappable parts, quantum tech is embracing modularity and specialization. Instead of one vendor building and owning the whole machine, different specialists provide the processor, the control systems, the cryogenic fridge, the software, etc., all designed to work together via common interfaces. This QOA approach promises faster innovation, lower costs, and wider access, heralding what many are calling the “PC moment” of quantum computing. Even the industry leaders now admit as much; in the words of IBM’s Jay Gambetta: “I fundamentally don’t believe the future is a full-stack solution from one provider.”

But every “PC moment” has a less glamorous companion story: the rise of system builders. When computing unbundled, the winners weren’t only the chip makers and OS vendors – it was also the OEMs, VARs, and systems integrators who turned compatible parts into machines enterprises could actually deploy, secure, and operate. QOA creates the same missing middle for quantum. As the stack opens up, “Quantum Systems Integration” becomes a discipline in its own right: the craft of turning modular quantum components into a coherent platform – and, eventually, into production-grade infrastructure.

From Mainframes to Modular: How Quantum Open Architecture Emerged

In the early days, quantum computing was a cottage industry housed in physics labs. Pioneering groups built entire quantum setups from scratch – the qubits, the microwave electronics, even the near-absolute-zero refrigerators required to make qubits behave. This provided ultimate control for experiments, but it was arduous and limited. Each lab (or early company) had to reinvent the wheel, and progress was slow and siloed. This mirrored the mainframe era of classical computing, when giants like IBM built every component of a computer in-house – an approach that worked initially but scaled poorly as complexity exploded. By the 1960s, classical computing learned that specialization was necessary; the quantum field is learning that now.

The turning point came as quantum projects grew in ambition. The complexity and cost of designing each piece of a quantum computer began to grow exponentially, making all-in-one efforts unsustainable. In classical terms, if early quantum labs were like the DIY hobbyist era, the field now faced its “Intel moment” – the realization that it’s smarter to focus on core strengths and source other components from specialists. One of the first signs was in cryogenics. In 2008 a Finnish startup, Bluefors, began selling ready-made dilution refrigerators to any lab that needed one. Suddenly, you didn’t need a PhD in refrigeration to get into quantum research – you could buy a state-of-the-art cryostat off the shelf. That was Quantum Specialization 101.

Over the next decade, more specialists appeared: companies like Qblox and Quantum Machines emerged to build dedicated quantum control electronics, relieving quantum computer builders of having to engineer every coax cable and waveform generator themselves. In 2021, QuantWare – a Dutch spin-out from TU Delft – launched as the world’s first independent supplier of quantum processing units (QPUs). In other words, you could now buy the “brain” of a quantum computer as a component, rather than develop your own superconducting qubits in-house. This was a game changer. As I previously put it, the quantum sector began shifting “from academia to industry” by breaking the problem into parts and handing those parts to specialized firms.

Why did Quantum Open Architecture (QOA) emerge? Several powerful forces converged:

Technical Complexity

As qubit counts and performance targets rose, no single group could master every aspect easily. Building a 100+ qubit machine means grappling with microwave engineering, nanofabrication, cryogenics, control software, error correction, etc. – it’s too much for one team.

Specializing allows each piece to be pushed further. This mirrors classical computing, where chip design went from hand-crafted circuits to requiring entire EDA toolchains and dedicated foundries (you wouldn’t design a modern CPU without tools like Cadence, nor rely solely on one lab’s fabrication capabilities). In quantum, the “exponential difficulty curve” of each component demanded dividing and conquering.

Economics of Scale

Many quantum components have high fixed costs but low marginal costs, meaning it’s expensive to develop the first unit, but replicating it is cheaper. For example, designing a cutting-edge qubit chip or control system might require huge R&D investment and infrastructure, but once designed, you can manufacture multiple units at much lower incremental cost. This incentivizes companies to become suppliers – amortizing R&D over many customers – rather than each quantum computing effort reinventing the wheel for just one machine. A specialist can sell 100 control boxes or qubit chips and thus charge each customer less than if that customer had to fund a one-off design.
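
To make that economics argument concrete, here is a minimal back-of-the-envelope sketch in Python. The cost figures are purely illustrative assumptions of mine (not vendor data); the point is the shape of the curve, not the values.

```python
# Illustrative amortization sketch -- all cost figures are hypothetical.
FIXED_RND = 50_000_000      # one-time R&D to design a component (e.g., a control stack)
MARGINAL_COST = 1_000_000   # incremental cost to build each additional unit

def cost_per_unit(units: int) -> float:
    """Average cost per unit when fixed R&D is spread across `units` customers."""
    return FIXED_RND / units + MARGINAL_COST

print(f"One-off in-house build:       ${cost_per_unit(1):>12,.0f}")
print(f"Specialist selling 10 units:  ${cost_per_unit(10):>12,.0f}")
print(f"Specialist selling 100 units: ${cost_per_unit(100):>12,.0f}")
```

As volume grows, the specialist’s per-unit cost collapses toward the marginal cost, which is exactly why “buy from a supplier” beats “build it yourself” for most layers of the stack.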

Speed of Innovation

With specialists competing and collaborating, each layer of the stack can advance faster. A focused team working only on, say, qubit calibration software can likely achieve breakthroughs more quickly than a jack-of-all-trades team balancing chip fabrication and software and integration at once. Competition in each layer – e.g. multiple control electronics firms vying to be the best – drives rapid improvements, which then benefit the whole ecosystem.

This best-in-class component approach is akin to how PC builders choose an Intel or AMD CPU, an NVIDIA GPU, etc., each advancing on its own roadmap.

Flexibility and Customization

Open architecture lets end users (researchers, companies) tailor a system to their needs. Rather than a black-box quantum computer where you take what the vendor gives you, QOA means you can pick a higher-coherence qubit chip from one source, pair it with control hardware optimized for fast feedback from another, use a specialized cryostat that fits your space or budget, and so on.

This modularity means quantum systems can be optimized or tweaked per application. For instance, one cryogenic system might be better for certain experiments (some are ultra-low vibration, others have bigger sample space, etc.); with QOA you have that choice.

Geopolitical and Sovereignty Drivers

Perhaps one of the biggest demand drivers for QOA has been national strategy. Governments and regions realized that quantum computing know-how equals strategic power, and they don’t want to be entirely dependent on foreign providers.

Quantum sovereignty – the ability to build and operate quantum technology within your own borders – is a growing priority in Europe, Asia, and elsewhere. An open architecture model is a perfect fit for this, because a country can leverage local industry strengths at each layer. For example, one country might excel at cryogenics, another at photonics, another at algorithms. By combining via QOA, each region can assemble a state-of-the-art machine without inventing everything alone. It also avoids reliance on a single foreign supplier; a government lab can choose a domestic cryostat vendor, a domestic software stack, etc., ensuring no single point of foreign control.

This was explicitly noted in the Netherlands and EU quantum programs, and by Israel with its national quantum center (as we’ll see). Open architecture thus mitigates supply chain and security risks: if one supplier falters or is restricted, you can swap in an alternative thanks to standard interfaces. The approach aligns with national security interests by keeping critical quantum tech “in-country” or at least in allied hands.

Community and Democratization

Beyond governments, the research community worldwide benefits. Open architecture means advanced quantum hardware is not confined to IBM’s or Google’s labs. University groups and startups globally can assemble serious quantum machines by sourcing pieces from the open market.

This “democratization” effect engages far more talent in quantum R&D. It’s analogous to how the availability of PC components enabled countless innovators to build computers, spurring creativity everywhere. Quantum researchers from, say, Spain or Australia can now get a high-quality processor and control system and start experimenting, rather than waiting for cloud access to someone else’s machine. The result is a worldwide acceleration of quantum know-how. Indeed, the geographic distribution of one QPU vendor’s customers – 20+ countries – shows how far this has spread, allowing many regions to join the quantum race by assembling rather than inventing.

Industry Endorsement

Even the big players acknowledge the open trend. IBM’s stance above is telling, and companies like NVIDIA are partnering in open quantum initiatives (for instance, in Israel’s IQCC, NVIDIA worked with Quantum Machines on a hybrid quantum-classical setup). Meanwhile, Rigetti Computing, one of the early full-stack startups, initially built everything itself (even custom control hardware), but as the ecosystem matured, even Rigetti began adopting outside components and embracing standard open software frameworks.

The industry sees that an ecosystem approach can grow the market bigger for everyone. A rising tide lifts all boats: if more quantum computers are built using parts from many vendors, there are more buyers for each vendor’s wares. This positive feedback loop – more specialization leads to better components, which leads to more machines built, which further lowers costs – is at the heart of QOA’s momentum.


In summary, QOA emerged because quantum computing got too important, too complex, and too global to remain an artisanal craft or a proprietary playground. Much like the classical computing industry’s shift to open architectures and horizontal markets (CPUs, memory, software all from different sources), quantum is undergoing a similar evolution. We’re witnessing the end of the quantum “mainframe” and the beginning of a more democratic, modular era – an era driven by necessity and nurtured by collaboration.

Inside the Quantum Open Architecture Stack (Hardware to Software)

What does a Quantum Open Architecture stack actually look like under the hood? In a traditional full-stack quantum computer (say, an IBM system), one company designs and tightly integrates everything. In a QOA system, by contrast, the machine is built from layers of components, often from different providers, unified by well-defined interfaces and standards. Let’s peel this onion from the bottom (hardware) to the top (user/software), mapping out each layer and its role. In this section I’ll keep vendor names to a minimum and focus on what each layer does, what “open” means at that layer, and where integration usually breaks. One way to read this stack is as an integration checklist. Every layer boundary is an interface, and every interface is a potential failure mode – technical, operational, or security-related. That’s why QOA’s success will depend as much on integrators as on component vendors. (If you want the full story of what “integration” really entails – platform integration vs. enterprise integration, and why it’s hard – see my deep dive on Quantum Systems Integration.) The next section is the companion piece: a field guide to the companies building each module and the new “prime contractor” role that stitches them together.

Quantum Processing Unit (QPU) – The Qubit Chip

At the heart is the quantum processor itself, typically a chip containing qubits (which could be superconducting circuits, trapped ions, photonics, etc., though superconducting transmon qubits are common in many open systems). In QOA, this is often a standalone component procured from a specialist. For example, Netherlands-based QuantWare produces superconducting QPU chips (their 64-qubit Tenor chip powers the Naples system) that can be purchased and plugged into a larger setup. This is analogous to buying an Intel or ARM CPU for a classical computer. The QPU provides the raw quantum bits and their basic interconnections. Open QPUs are designed to be compatible with standard control and packaging, so a lab or integrator can drop the chip into a dilution fridge, hook up control lines, and start running quantum circuits. Key concerns at this layer are qubit coherence (how long they maintain quantum state), gate fidelities, and qubit count.

In an open stack, if you want more qubits or a different technology, in principle you could swap or upgrade the QPU without changing the rest of the system – provided interfaces (pin configurations, communication protocols) are standardized.

In QOA, the QPU becomes a procurable module – closer to a “CPU you can buy” than a secret recipe. That single shift changes everything: it turns the hardest part of the machine into something an integrator can source, qualify, and swap over time. We’ll map the major QPU suppliers and modalities in the ecosystem section; for now, the key point is that QOA treats the QPU as a replaceable core, not a monolith.

Cryogenic Systems & Support Hardware

Most qubit technologies (superconducting, spins, etc.) need cryogenic refrigerators – super-coolers that keep qubits at millikelvin temperatures. In QOA, the cryo system is typically a separate module. The leading supplier here is Bluefors (Finland), whose dilution fridges are ubiquitous and were one of the first commercial products adopted by nearly every quantum lab. Bluefors essentially enabled startups and labs to skip building their own fridge and buy one, greatly accelerating experiments.

Cryogenics is where quantum reminds you it’s still closer to a particle-physics instrument than a server rack. In a QOA world, the cryostat, wiring, filters, packaging, and mechanical constraints form a “cold chain” with its own compatibility rules. The practical sovereignty lesson here is blunt: if your cold chain is proprietary, your whole system is proprietary – even if the QPU is “open.” The ecosystem section calls out the key suppliers; what matters at this layer is that interfaces become mechanical + thermal + RF standards, not just software APIs.

Control Electronics

Above the fridge, you have classical hardware that controls the qubits – generating microwave pulses, reading out qubit states, and managing feedback. This layer in a QOA stack is often provided by dedicated control system vendors.

In an open stack, the control layer is where modularity either becomes real – or becomes a myth. The hard problems are synchronization, latency, noise, and “scale economics”: moving from a lab’s pile of instruments to a repeatable control plane that can grow from five qubits to fifty without becoming a cabling horror show. The vendors building this layer differ in philosophy (latency-first vs. modularity-first vs. instrument-grade precision), and that choice has second-order consequences for calibration automation and error-correction readiness – which is why it’s worth treating the control plane as a strategic procurement decision, not a commodity.

We’ll name the major control-stack players in the ecosystem section.

Middleware and Quantum Firmware

Sitting between hardware and user-facing software is a crucial glue layer: the calibration, optimization, and operating software that makes the hardware usable. This includes quantum firmware (for tuning up qubits, suppressing errors) and middleware for job scheduling, resource management, etc.

This layer is the quantum equivalent of BIOS, drivers, and automated diagnostics – the unglamorous software that determines whether a machine is an instrument for experts or a platform others can actually use. In QOA, firmware and automation become even more important because they’re the force multiplier that lets a broader set of teams operate hardware they didn’t design. It’s also where openness pays compounding returns: improvements can propagate across many installations that share similar module interfaces, rather than being trapped inside one vendor’s walled garden.

Hybrid integration middleware matters too, but we’ll treat it in the ecosystem section under integrators and platforms, because it’s as much an operations story as a software story.

Standards and Interfaces

Gluing all these layers together are evolving standards – both formal and de facto. In the classical PC world, standards like PCI, USB, x86 instruction sets, Ethernet and so on enabled interoperability. Quantum computing is young, so standards are still forming, but the open architecture push is accelerating them:

  • Hardware interfaces – There’s movement toward standard microwave connectors, qubit control signal formats, and fridge pin layouts. For instance, some consortia discuss standardizing the “quantum socket” – how a chip mounts and connects to a fridge and control lines (e.g., a common chip packaging so any compliant chip could fit any compliant fridge slot). Companies like QuantWare have proposed an open standard for QPU chip packaging (since they offer foundry services, they want others to design chips that can be fabricated and used interchangeably).
  • Software APIs – Many control systems now support common frameworks like Qiskit, Cirq, or Q#: you can program a QOA machine using these higher-level languages such that the same code could run on different back-ends (see the sketch just after this list). Also, initiatives like QIR (Quantum Intermediate Representation), originally developed by Microsoft and now stewarded by the cross-vendor QIR Alliance, aim to define a standard low-level instruction set for quantum programs, so that compilers and hardware can talk in a common language.
  • Networking and protocols – For multi-module systems (like multiple quantum chips networked together, or connecting quantum and classical nodes), there are early efforts to standardize quantum networking protocols, alongside the pragmatic reuse of existing HPC standards (such as MPI) for hybrid jobs. Israel’s IQCC integrated with AWS cloud using standard cloud APIs, showing that the quantum part can be treated as another resource in a larger IT framework.
  • Crucially, open architecture necessitates standards – and indeed, the rise in component providers is forcing the maturation of interfaces. The more players you have, the more everyone benefits from agreeing on how to connect things. We’re seeing the beginnings of “plug and play” quantum hardware. For example, if you buy QuantWare’s QPU and Qblox’s control system, they’ve ensured the two integrate smoothly (cabling, signal levels, and software) because of collaborations and emerging interface norms. As QOA systems proliferate, expect formal standards bodies to emerge (perhaps under IEEE or ISO) for certain aspects of quantum hardware/software interoperability.
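
As a small, concrete illustration of the software-interface point above (a sketch, not a recipe for any particular vendor’s stack): the snippet below uses Qiskit to describe a two-qubit circuit once and then serializes it to OpenQASM 3, a vendor-neutral text format that a growing number of compilers and control stacks can ingest. The circuit and the export call reflect standard Qiskit usage as I understand it; whether a given back-end accepts the result still depends on that back-end’s supported gate set.

```python
# Build a circuit once, emit a vendor-neutral representation (OpenQASM 3)
# that different compilers/controllers can consume.
from qiskit import QuantumCircuit, qasm3

qc = QuantumCircuit(2, 2)       # two qubits, two classical bits
qc.h(0)                         # Hadamard on qubit 0
qc.cx(0, 1)                     # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])      # read both qubits out

# The resulting OpenQASM 3 text is the "lingua franca" artifact: in principle
# it can be handed to any toolchain that implements the standard.
print(qasm3.dumps(qc))
```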

User Interface and Cloud Layer

At the very top, how does a user interact with an open quantum computer? Increasingly, through cloud platforms or standardized interfaces. For instance, the Tuna-5 machine (a Dutch open-architecture system we’ll revisit in the case studies) is accessible via Quantum Inspire, a cloud portal that also hosts other Delft processors. Users can log in and run jobs on Tuna-5 using Python APIs or web interfaces, without worrying that under the hood the system has parts from five different vendors. Similarly, IBM’s quantum cloud and AWS Braket abstract the specifics of hardware behind a common interface. But in an open model, one could imagine a universal cloud interface where hardware from different vendors appears side by side. The key point: QOA doesn’t mean a user has to manually handle each component – integrators provide an experience where the machine functions as a cohesive whole; it’s just that internally it’s built openly. In practice, that cohesive experience isn’t just UX – it’s operability: identity and access, segmentation between control planes and workloads, logging and audit trails, patching and change control, uptime targets, and a cost model that survives beyond the first demo. This is where Quantum Enterprise Integration (one half of Quantum Systems Integration) quietly determines whether QOA stays a lab triumph or becomes real infrastructure. Cloud access and common SDKs (like Qiskit, Cirq, or Braket’s Python SDK) ensure that from a user’s perspective, running an algorithm on a QOA system is as straightforward as on a proprietary system.
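
For flavor, here is what that “cohesive from the top” experience typically looks like in user code – a hedged sketch using Qiskit’s generic backend pattern, with a local simulator standing in for whatever cloud-exposed QOA machine (Quantum Inspire, IBM Quantum, Braket, etc.) a given portal offers. Real providers differ in authentication and job management, but the user-facing shape is broadly this.

```python
# Sketch of the top-of-stack workflow: the user writes a circuit and submits it
# to "a backend"; which vendors built the fridge, QPU, or controller underneath
# is invisible at this level. AerSimulator is a stand-in for a real cloud backend.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

backend = AerSimulator()                 # in practice: a provider's get_backend(...) call
compiled = transpile(circuit, backend)   # compiler adapts the circuit to the target's native gates
job = backend.run(compiled, shots=1000)  # submission looks the same regardless of who built the hardware
print(job.result().get_counts())
```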


To summarize this stack: each layer – QPU, cryogenics, control hardware, firmware, integration software, user interface – can be provided by specialized companies and then pieced together. The result is a full quantum computer, but one that is “open-source hardware” in spirit. Different labs or integrators might choose different combos (maybe an American lab uses an Australian cryostat with a Canadian control system and a Dutch QPU), but as long as standards are respected, they can all work together. This modular stack approach is fueling a quantum ecosystem where many companies can contribute and benefit, rather than a winner-takes-all model.

Key Players in the QOA Ecosystem (Who Does What)

If the previous section was the anatomy of a QOA system, this is the economy: the emerging supply chain and the companies trying to become the “Intel,” “ASUS,” or “Microsoft” of quantum modules. To avoid repeating the stack, I’ll organize the ecosystem by commercial role and integration leverage – who sells the core module, who controls the interfaces, and who ends up owning the customer relationship.

The QPU merchants (qubits as a component)

Suppliers of the quantum chips themselves.

  • QuantWare – Founded 2021 in the Netherlands, provides off-the-shelf superconducting transmon QPUs (e.g. 5-qubit “CobALT”, 17-qubit “Contralto”, 64-qubit “Tenor”). QuantWare aims to be the “Intel of quantum”, focusing purely on making powerful qubit chips for others to use. Their processors are used in several open systems (Israel’s IQCC 25-qubit, Naples 64-qubit) and they emphasize scalability (developing 3D chip tech for >100 qubits).
  • Rigetti Computing – A pioneer in superconducting qubits, based in the US. Rigetti initially vertically integrated, but its chips (like the 40-qubit Aspen series) are well-regarded and used via cloud (they could theoretically be offered as components). Rigetti’s early need to build custom control hardware in-house underscored the need for specialist support that emerged later.
  • IBM – While IBM is the epitome of a full-stack provider, interestingly IBM has begun partnering (e.g. IBM’s Qiskit works with other hardware, and IBM has open-sourced much of its software). They publicly acknowledge the future likely involves mixing technologies. IBM’s own breakthroughs (like 127-qubit and 433-qubit chips) remain in-house, but interfaces like Qiskit Runtime could allow them to connect with external pieces (IBM has talked about hybrid cloud integration, etc.).
  • IonQ, Quantinuum, Pasqal, etc. – These are full-stack makers (ion trap, ion trap, neutral atom respectively). While not “open architecture” in offering components, they are part of a broader ecosystem. It’s possible in future that an IonQ trap could be integrated as a module in a larger system (for instance, a photonic network connecting multiple smaller quantum processors of different types, each possibly from a different source – a far-future open architecture concept).
  • Research fabs & foundries: We should also mention initiatives like Europe’s quantum foundries (e.g., IMEC, CEA) working with companies to produce chips for multiple parties – which supports the QOA idea of chips as a product.

The cold chain (fridges, wiring, packaging)

Keeping qubits in their delicate low-temperature, low-noise environment.

  • Bluefors – The market leader in dilution refrigerators. Their units are essentially in every major superconducting quantum lab. Bluefors fridges are known for reliability and capacity (some models support hundreds of wiring lines for large qubit counts). They kick-started the commercialization of quantum by selling fridges starting in 2008.
  • Oxford Instruments/FormFactor – Oxford Instruments (now under FormFactor’s umbrella for quantum) also has a long history of providing cryostats and still does, especially for small-scale and research uses (they have “Proteox” cryostats, etc.). They also offer probing systems to test quantum chips on wafer – important for integrating the manufacturing pipeline.
  • Maybell Quantum – A newer entrant focused on compact dilution refrigerators for companies or labs that want dedicated, more affordable systems. Part of the new wave of horizontal specialization.
  • Kiutra – Specializes in cryogen-free cooling based on ADR (adiabatic demagnetization refrigeration), reaching millikelvin temperatures. While ADRs aren’t yet common for quantum computers (most use dilution refrigerators), Kiutra’s tech could complement them or offer simpler operations in some cases (no helium-3/helium-4 mixture to manage).
  • Delft Circuits – Provides cryogenic cabling solutions (their flexible “Cri/oFlex” cables are used in many projects).
  • Lake Shore Cryotronics, Montana Instruments – Make some specialized cryostats and instrumentation (for smaller quantum experiments, like photonic or spin qubit setups).
  • Quantum Cleanrooms & Testbeds: Some companies (or institutes) provide facilities as a service – not quite a product company, but worth mentioning. For instance, Quantum Foundry at UC Santa Barbara or TNO’s NanoLab in Delft allow small companies to fabricate devices without building their own fab – facilitating specialized QPU development (part of open ecosystem enabling).

The control plane (latency, feedback, scalability)

Nerves and signals connecting classical and quantum realms.

  • Qblox – Delivers full-stack control hardware in modular form. Their control stack was used in Tuna-5 and is part of the QUB reference design. They emphasize scalability (recently demonstrating their modules on 20+ qubit setups, aiming for 50+). Qblox often collaborates with QPU makers to ensure compatibility.
  • Quantum Machines (QM) – Known for OPX control systems. They champion an architecture-first approach, heavily promoting the idea of modular, hybrid quantum-classical computing. QM’s control hardware is at the heart of Israel’s IQCC, orchestrating both superconducting and photonic processors together. They also integrate with HPC (their partnership with NVIDIA for DGX Quantum). With their new OPX1000, they’re preparing for 1000+ qubit control, which will likely be part of large open systems.
  • Zurich Instruments – Still frequently used in labs for certain tasks (their UHFLI lock-in amplifier or SHFQC qubit controller can handle a few qubits). Now part of Rohde & Schwarz, which might push toward larger integrated solutions.
  • Keysight – Has a quantum control system (Keysight Quantum Control System, combining AWGs and digitizers with software) used in some startup labs. Keysight also supplies high-quality analog electronics that can integrate if needed.
  • Teledyne, SignalCore, etc. – Niche players selling microwave signal sources and related components that can be part of custom setups.
  • Integration efforts: There are attempts to standardize control stacks – e.g., OpenQASM and quantum assembly languages that any controller can accept. QM, Qblox, and others typically allow uploading pulse sequences via Python libraries or QASM-like scripts, which again means researchers can swap out controllers without rewriting all their experiment code.

The reliability layer (calibration, error suppression, orchestration)

Brains and algorithms to manage qubits and run computations.

  • Q-CTRL – A leader in error suppression and calibration automation. They effectively provide a software overlay that can make given hardware perform better, which is extremely valuable in a modular context (no matter whose qubits or hardware, you can often benefit by adding Q-CTRL’s controls). Q-CTRL’s tools have been used on hardware from IBM, Rigetti, and others, illustrating cross-platform utility.
  • QuantrolOx – Focused on a narrower (but crucial) task of automated tune-up. Their AI-driven system can drastically shorten the time to get a new quantum chip functioning (a process that used to take human experts days or weeks). They align well with QOA because a customer buying a third-party QPU might lack the deep expertise to optimize it – QuantrolOx fills that gap as a pluggable solution.
  • Classiq – Provides an abstraction layer for creating quantum algorithms and circuits without needing to code at the gate level. At IQCC, they provided high-level software so users can focus on problems rather than low-level details. In an open ecosystem, one could use Classiq with different backends interchangeably.
  • Strangeworks, Zapata, Infleqtion (formerly ColdQuanta, which absorbed Super.tech) – These and others provide software for algorithm development, error mitigation, or scheduling that can sit on top of quantum hardware. Many are hardware-agnostic, which naturally complements an open hardware ecosystem.
  • Compilers & Runtime: Projects like OpenQL (from QuTech) or XACC (framework by Oak Ridge) aim to be universal compilers/runtimes for quantum programs. They help bridge the gap between user code and diverse hardware by providing a common compilation pathway. This is more on the research side now, but as QOA grows, such middleware will gain traction.
  • Operating Systems & Lab Management: Orange Quantum Systems (with their Juice software) and others (see also Topological Industries or university projects) are exploring OS-like layers for quantum computers. Another example: QM’s OPX+ effectively runs an embedded real-time OS for experiments. Expect more development here as labs demand better coordination tools for complex setups.

Quantum Systems Integrators – the primes (integration, warranty, operations)

If QOA is the unbundling of the quantum computer, Quantum Systems Integrators are the rebundlers. They take a multi-vendor stack and deliver a single accountable platform – commissioned, validated, documented, and supportable.

In practice, this splits into two related jobs: Quantum Platform Integration (assembling and commissioning the machine itself – the cold chain, control plane, firmware, and interfaces) and Quantum Enterprise Integration (embedding that machine into HPC, security, governance, and day‑2 operations). I’ll go deeper on this in a dedicated Quantum Systems Integration article, because QOA’s “PC moment” won’t scale without it.

Some of the “general contractors” who put all the pieces together into a functional whole:

  • Applied Quantum (Global) – (Disclosure: this is my company). A new entrant focused on Quantum Systems Integration as production infrastructure: integrating modular/QOA platforms and embedding them into enterprise and HPC workflows with security, compliance, export‑control awareness, operability, and total cost-of-ownership discipline. The positioning is simple: many teams can assemble a quantum stack; far fewer can make it production-ready.
  • ParTec (Germany) – A veteran in classical HPC system integration, now applying that expertise to quantum-classical hybrids. ParTec’s QBridge software and involvement in projects like Jülich’s quantum system show their role as an integrator bridging HPC and quantum. They ensure that a quantum processor can be embedded in a supercomputing center’s workflow securely and efficiently. ParTec essentially acts as the glue and orchestrator, dealing with both hardware integration and software scheduling.
  • TreQ (Netherlands) – A newer specialist explicitly positioning as a quantum system integrator. They collaborate with various component makers to deliver turnkey quantum systems to end-users (like research labs or companies that want an on-prem quantum computer but don’t want to assemble it themselves). TreQ is involved in Dutch projects ensuring all parts (qubits, electronics, software) work together. They exemplify how integration itself is becoming a business and a technology: knowing how to “mix and match” components optimally is a skill set.
  • Elevate Quantum / TechHubs (USA) – Organizations like Elevate Quantum in Colorado (a U.S. government-designated tech hub) serve as integration and demonstration sites. They partner with component vendors to host reference quantum systems (like the upcoming QUB-based Q-PAC platform) and make them available to users. These act as proving grounds for QOA – a place where a complete system is maintained and users can try it, without each user having to build from scratch.
  • Large IT Integrators: It wouldn’t be surprising to see big systems integrators (Accenture, Capgemini, etc.) eventually step in to help enterprises deploy quantum systems by sourcing components. Already companies like HPE are dipping their toes in (HPE is working on a project with European partners to integrate quantum accelerators with supercomputers). HPE’s recent launch of the “Quantum Scaling Alliance” to help data centers adopt quantum is a hint of future integrator roles.
  • Quantum Cloud Providers: While not an integrator in the physical sense, platforms like AWS Braket, IBM Quantum, Microsoft Azure Quantum integrate virtually multiple types of hardware under one service. For example, Azure Quantum offers access to IonQ, Quantinuum, and soon QCI hardware through one interface. This is a form of “open architecture” at the cloud level – it doesn’t mix hardware into one machine, but it gives users a unified way to tap various hardware. This may evolve such that these clouds eventually allow hybrid usage (e.g., use an IBM QPU and an IonQ QPU in the same workflow).

The long tail and standards makers

The above lists are not exhaustive. Many startups globally are tackling pieces of the stack:

  • Hardware niche: e.g., SeeQC (USA) integrating classical control on chip; Oxford Quantum Circuits (OQC) (UK), which builds full systems but could potentially supply components like its 3D coaxmon chip; C12 (France) focusing on carbon nanotube qubits that could be inserted in modular fashion; etc.
  • Photonics ecosystem: companies like Xanadu, QuiX, ORCA supply photonic hardware that might be plugged into larger systems (Xanadu’s chips could act as specialized accelerators, for instance).
  • Error correction and middleware: startups like Riverlane are developing software for error correction (their Deltaflow.OS aims to be a layer for managing qubits and error correction across different hardware).
  • Testing and measurement: companies that provide specialized filters, amplifiers, current sources for qubits.
  • Standardization groups: Communities like QED-C (Quantum Economic Development Consortium) in the US and QuIC (Quantum Industry Consortium) in EU help coordinate between these players, which indirectly fosters standard interfaces and partnerships – the backbone of QOA.

Each of these companies focuses on one piece of the puzzle. In the quantum mainframe era, one might have tried to do it all (indeed, Google builds its own chips, controls, etc. internally). But QOA flips that script: each firm perfects its piece, and the pieces are combined. As a result, we’re seeing a collaborative industry structure take shape. No single company has to “boil the ocean” anymore; instead, they contribute to a supply chain.

Importantly, this doesn’t mean it’s easy – coordinating these pieces is hard (as we’ll discuss in challenges) – but it’s a fundamentally different model from the vertically integrated approach. It’s akin to how the personal computing industry evolved: at first, a company like IBM did everything; later, you had Intel making CPUs, Microsoft making OSes, Seagate making disks, etc., all working together (sometimes uneasily, but effectively). Quantum tech is now at that inflection point, with these players above forming a nascent quantum supply chain.

QOA in Action: Case Studies and Milestones

Theory and promises aside, how is Quantum Open Architecture working out in practice? Let’s look at several pioneering projects around the world that have demonstrated QOA principles – assembling quantum computers from modular components, often as first-of-their-kind achievements for their region or type.

Tuna-5 (Netherlands) – A Homegrown Modular Quantum Computer

One flagship example is Tuna-5, unveiled in May 2025 in Delft, Netherlands. Tuna-5 is a 5-qubit superconducting quantum system built entirely with components from the local Dutch quantum ecosystem and made available to the public via the cloud. The system was developed under the national HQ/2 (HectoQubit/2) project by a collaboration of QuTech (a leading quantum research institute), TNO (a Dutch research organization), and four startups – QuantWare, Qblox, Orange Quantum Systems, and Delft Circuits. Each contributor supplied their piece: QuantWare fabricated Tuna-5’s quantum chip with five transmon qubits and flux-tunable couplers (hence the fishy “Tuna” nickname). Qblox provided the control and readout electronics, Orange QS delivered the quantum operating system and software toolkit, Delft Circuits contributed the cryogenic wiring, and QuTech/TNO integrated everything and handled the cloud interface.

Tuna-5 is essentially a proof-of-concept “open architecture” quantum computer. It’s not aiming for record qubit count, but rather demonstrates that a fully functional quantum computer can be built by combining modules. It’s hosted in the DiCarlo Lab at QuTech and accessible on Quantum Inspire, meaning any researcher or student can run experiments on this Dutch-built machine. The project serves as a “system-readiness benchmark” – basically a testbed to iron out integration issues and pave the way for larger machines. Indeed, Tuna-5 aligns with the EU’s OpenSuperQPlus program targeting a 100-qubit open system by 2026, with Delft slated as one of the demonstrator sites.

A few notable aspects of Tuna-5’s design: the use of flux-tunable couplers on the chip allows dynamic control of interactions between qubits, an advanced feature to improve gate fidelity. The extensive calibration know-how from QuTech’s research was fed into the system; for example, strategies to minimize crosstalk between qubits were implemented by tweaking controls and coupler settings. This shows how feedback between research and product happens in QOA – the academic lab provided know-how that improved the startup components, and in turn the startups’ products enhanced the lab’s capabilities. Tuna-5’s success strengthened the Dutch supply chain by revealing and then addressing integration challenges, effectively stress-testing the interoperability of the components. It’s a stepping stone to bigger things: as of 2025, a scaled-up prototype with more qubits (and Delft Circuits’ new cabling) is in parallel development, informed by Tuna-5’s results.

Symbolically, Tuna-5 represents quantum sovereignty for the Netherlands. Rather than relying on an imported machine, the Dutch ecosystem collectively built one. It reinforces national capability and provides a platform for local researchers and startups to experiment freely. Tuna-5’s launch was celebrated as a milestone that “showcases a fully integrated quantum system built using an open-architecture approach and leveraging the Delft supply chain.” By proving out a small modular quantum computer, the project gives confidence that larger systems (like the planned 50-100 qubit Dutch/EU machines) can be assembled from European parts without starting from zero.

Israeli Quantum Computing Center (Israel) – An Open Quantum Facility for All

While Tuna-5 is a single machine, the Israeli Quantum Computing Center (IQCC) is an entire facility designed with open architecture principles. Opened in June 2024 at Tel Aviv University and operated by Quantum Machines (QM) with government backing, the IQCC is a “quantum and HPC center” that houses multiple quantum computers of different types under one roof. It’s one of the first places in the world where you can find, side by side, a superconducting qubit system and a photonic quantum system, both accessible to external users. All of these are integrated with a classical supercomputer on-site and cloud connectivity.

IQCC’s ethos is captured by QM’s CEO Itamar Sivan, who said the center was built to be “the most advanced facility in terms of interoperability, modularity, and integration with HPC and the cloud”, and that an “open architecture approach will ensure the facility can be continuously upgraded and scaled to stay at the cutting edge.” In other words, the IQCC is designed not as a static installation but as a plug-and-play environment – new quantum devices can be added, control hardware can be swapped or upgraded, HPC connections can grow, all without rebuilding from scratch. This is critical because quantum tech is evolving fast; an open architecture facility can adapt as new breakthroughs come (much like a modular data center can replace servers over time).

At launch, the IQCC featured a 25-qubit superconducting quantum processor from QuantWare (their Contralto chip) as one computational element, and an 8-qumode photonic quantum computer from ORCA Computing as another. Both are run through Quantum Machines’ OPX+ control systems, providing a unified control interface despite the different qubit modalities. This demonstrates the power of a good control platform in open architecture – QM’s electronics and software can speak to superconducting qubits (turning microwave pulses into quantum gates) and separately to a photonic setup (managing laser pulses or optical switches), all while coordinating with each other and the classical HPC cluster. The center also leverages NVIDIA’s DGX Quantum platform – essentially a high-end classical GPU system tightly coupled with QM’s controller – to enable hybrid algorithms where GPU-heavy computing and quantum computing occur in tandem with minimal latency. The classical side includes ARM and AMD CPUs plus connections to AWS cloud, making the facility one big quantum-classical sandbox.

What’s the purpose of IQCC? It serves multiple roles:

  • Open testbed for developers: A startup or university developing their own quantum chip can bring it to IQCC and plug it into a world-class infrastructure for testing, instead of having to build a million-dollar lab themselves. QM’s CTO Yonatan Cohen noted that before IQCC, an innovator “would need to build their own testing setup, costing millions”, whereas now they can “plug their chip into our testbed… accelerating their development and reducing costs significantly.” This lowers the barrier for hardware innovation – you don’t need a whole lab, just a chip to test; the center provides the rest.
  • Education and talent development: Israeli academia and startups get priority access to the center, giving students and engineers hands-on experience with quantum computing in a way few other places offer. By having an open platform, they can learn by tinkering at all levels (hardware, software) instead of treating the system as a sealed box.
  • Quantum independence: Strategically, Israel is ensuring it has domestic capability. The phrase “securing Israel’s quantum independence” was used. The center means Israel isn’t solely dependent on cloud access to US or Chinese quantum machines; they have their own that can be expanded and improved locally. It’s also a hub to attract global collaboration – being open, it invites international researchers to come use it, putting Israel on the map in quantum R&D.
  • Showcasing modularity: The very fact that multiple quantum types are co-located and share infrastructure is a strong endorsement of open architecture. The IQCC can add, for instance, a trapped-ion system next, or a different superconducting QPU, relatively straightforwardly. The idea is to create a “plug-and-play” quantum data center.

The IQCC’s establishment was widely noted as a milestone. Forbes called it “a first of its kind globally” for integrating different qubit types with supercomputers. The message is clear: Quantum Machines and Israel built not just a quantum computer, but a quantum computing CENTER that’s modular from day one. This center will likely influence designs elsewhere, demonstrating how to build facilities that can evolve. In essence, IQCC is like a quantum computing version of a cloud data center, where you can slot in new hardware as technology advances. It stands as a proof that open architectures are not only possible but highly beneficial at scale.

QUB and Q-PAC (USA) – A Modular Quantum Reference Architecture

Moving to the United States, an interesting development is the collaboration between three specialist companies – QuantWare, Qblox, and Q-CTRL – to create a standardized, modular quantum system called QUB (Quantum Utility Block). Launched in late 2025, QUB is presented as a “family of pre-validated, full-stack quantum computer reference implementations” that can be deployed on-premises by enterprises or research institutions. In simpler terms, QUB is like a blueprint (with actual hardware kits) for an open quantum computer: it combines QuantWare’s QPUs, Qblox’s control electronics, and Q-CTRL’s software into a package that others can replicate. The idea is to give organizations a third option beyond the two extremes they faced: either buy a closed, expensive system or try to build one from scratch. QUB offers a middle path – a “modular, accessible quantum system” that’s mostly ready-made but still open and customizable.

What makes QUB particularly notable is the support from a U.S. government initiative. The partnership is working with Elevate Quantum, a TechHub in Colorado, to deploy a demonstration system called Q-PAC (Quantum Platform for the Advancement of Commercialization) by 2026. Q-PAC will be a national testbed where researchers and companies can get hands-on with a QUB-based quantum computer. It’s billed as “the fastest path to quantum utility for enterprise and research”, highlighting ease of adoption. Essentially, Q-PAC will serve like a showroom and sandbox for the QUB architecture: an actual running system (with likely tens of qubits) that people can use, and if they like, they can order one for themselves with that reference design.

The QUB reference design currently offers configurations of Small (5-qubit), Medium (17-qubit), and Large (41-qubit) systems, corresponding to QuantWare’s available QPU sizes. These come with the matching Qblox control hardware (scaled appropriately) and Q-CTRL’s Boulder Opal and Fire Opal software pre-integrated for calibration and error reduction. Because the components are pre-validated together in real use (via Tuna-5, via the Naples system, etc.), a new customer can have more confidence it will work “out of the box”. Q-CTRL’s VP of Product, Alex Shih, emphasized that QUB shows “software interoperability and flexibility to unlock performance” – e.g., their tools automatically calibrate QuantWare’s QPUs and suppress errors on Qblox controllers, all without the end user having to do the heavy lifting. It’s a demonstration of how aligning best-in-class pieces yields a whole greater than the sum of its parts.

From a market perspective, QUB is trying to productize open architecture quantum computers. It’s as if three leading suppliers said “let’s package our Lego blocks into a kit so others can build the same Lego structure easily”. Matt Rijlaarsdam, QuantWare’s CEO, said QUB is about “removing barriers… [so] organizations no longer need to choose between opaque closed-stack systems or risky do-it-yourself builds”. The messaging is very much that of democratizing access – making it feasible for a university, a smaller tech company, or a government lab to own a quantum computer outright, not just access someone else’s cloud, and to do so affordably and with support. That’s a new development: until now, owning a quantum computer was either the realm of big tech companies or nations. QUB hints at a near future where quantum hardware could be an enterprise purchase (like buying a supercomputer or an MRI machine) for those who want on-premises capability. And because it’s modular, the customer has “complete control over their technology future” – they can upgrade or modify parts as needed.

The U.S. government’s involvement (via the Economic Development Administration’s Tech Hubs program) in supporting Q-PAC is also telling. It indicates policymakers see open, modular systems as key to building domestic quantum capacity. Rather than solely funding closed proprietary efforts, they’re investing in a reference platform that many can use and learn from. Q-PAC will likely be used to train quantum engineers and to test integration with supercomputers (Colorado has a lot of HPC and aerospace presence, for example). Jessi Olsen of Elevate Quantum noted this partnership “creates a national resource for training the next generation” and “lowers barriers, reduces risk” via open collaboration.

In essence, QUB and Q-PAC are about standardizing an open quantum “stack” and scaling it out. If successful, we could see multiple copies of QUB systems deployed in labs, much like standard minicomputers in the past. It could also pressure other providers to open up; if a turnkey open system exists, why lock into a closed one? At the very least, it will enrich the ecosystem with more real machines that developers can work on directly. The PC revolution was fueled by “IBM-compatible” reference designs – QUB aims to be a quantum-compatible design that accelerates the whole field.

University of Naples 64-Qubit System (Italy) – Democratizing High-End Quantum Access

Our final case takes us back to where we began: the University of Naples Federico II in Italy, which in 2025 deployed a 64-qubit superconducting quantum computer in a university lab setting – a system larger than many of the commercial machines at that time. What’s remarkable is that this was achieved not by purchasing a turn-key device from IBM or another giant, but by assembling components from the open market. The centerpiece is QuantWare’s 64-qubit “Tenor” QPU, at the time one of the highest qubit-count chips available outside IBM/Google. The Naples team integrated this QPU with a dilution refrigerator, control electronics, and software to create a full system, dubbed Italy’s largest quantum computer.

Professor Francesco Tafuri, who led the effort, highlighted how Quantum Open Architecture made this possible. He noted they needed a processor that was “powerful, but commercially available and ready for integration” – QuantWare’s off-the-shelf Tenor chip fit the bill and “significantly accelerated our timeline”, allowing the team to focus on building the overall system and its applications rather than inventing a 64-qubit chip from scratch. In the past, if a university wanted a cutting-edge quantum processor, they either had to spend years fabricating one (likely ending up with far fewer qubits), or buy an entire closed system. Here, because the QPU could be bought “retail,” they leapfrogged straight to a world-class processor and then invested effort in integration. This approach slashed both time and capital costs compared to alternatives. Instead of tens of millions for a closed system or endless R&D for a homegrown chip, they could get hardware in hand relatively quickly and affordably.

The Naples 64-qubit system illustrates a few key benefits of QOA:

  • Customization and Control: By assembling it themselves, the researchers gained deep understanding of every piece. They can access hardware-level settings, modify the setup, and troubleshoot as needed – things often not possible with commercial black boxes. This “open-box” approach is fantastic for research and education. It’s akin to students building a kit car versus just driving a finished car; the former teaches you how the engine works. Naples students and staff now have experience integrating qubits, calibrating large chip arrays, etc., giving them invaluable skills. It’s building local expertise, not just buying a product.
  • Democratization of capability: A 64-qubit device is serious hardware – on par with some of the best in the world in 2025. That it sits in a university lab in Italy (a country without a big quantum giant company) shows how QOA spreads the tech around. QuantWare has customers in 20+ countries for exactly this reason. As the Quantum Zeitgeist report noted, “countries and institutions that might struggle to develop complete quantum systems internally can now participate… by assembling systems from specialized components”. Naples’ success is a template that other universities or smaller nations could follow – you don’t have to be IBM or have a billion-dollar program to stand up a 50+ qubit machine anymore.
  • Ecosystem validation: QuantWare’s CEO Matt Rijlaarsdam pointed out that Naples operating such a system “beyond most of the systems built by closed architecture players” is a strong validation of the open approach. It’s a proof point to skeptics that yes, a few researchers with the right partnerships can build something as good as what a big corporation can, perhaps even faster. That in turn drives more interest in buying components, creating a positive feedback loop – more demand lowers cost, which enables more systems, and so on.
  • Standard tech, unique integration: The Tenor QPU uses the same transmon qubit tech as IBM/Google, but is “packaged as a standalone component that can be integrated into diverse architectures”. Naples took this component and integrated it their own way (presumably with their choice of fridge and control system). This highlights that differentiation can come from integration quality and use-case focus, not just proprietary hardware. They could, for instance, wire the system to be optimal for certain experiments or try novel cooling techniques – all while using a mainstream chip. QOA allows that freedom to innovate at the system level.

The Naples case also underscores the importance of standardized knowledge sharing. Because many QOA systems share components, lessons learned in one place help others. For example, if Naples figures out how to reduce a particular error on the Tenor chip by adjusting a parameter in the Qblox controller (if they used Qblox, which is likely given the Dutch connection), they could share that finding and everyone with a similar setup would benefit. Likewise, they can draw on the wider community’s knowledge – perhaps tuning procedures from QuTech’s Tuna-5 or error mitigation from Q-CTRL’s guides.

Finally, the milestone aspect: This is the largest open-architecture quantum computer at a university to date, and perhaps one of the largest in Europe outside of national programs. It’s a big psychological shift – seeing a major quantum system in a university lab, not just in a corporate or government lab. It harks back to how in the PC era, powerful computing left elite corporate data centers and landed on university campuses and even dorm rooms. With open architecture, the quantum power is being decentralized and distributed.

In summary, the University of Naples system is a shining example of QOA’s promise: world-class quantum capability, assembled in a modular way, leading to democratization of both access and knowledge. It’s a harbinger that we may soon see many “Naples-like” quantum computers at universities and companies worldwide, each customized for their goals but leveraging common building blocks.

Sovereignty and Regional Quantum Independence

One recurring theme across these case studies is the idea of quantum sovereignty – the desire of nations or regions to develop quantum technology domestically for strategic, economic, and security reasons. Quantum Open Architecture is proving to be a key enabler of that sovereignty.

In the old model (closed, full-stack systems), if a country wanted a cutting-edge quantum computer, they often had to either:

  • Buy one from a foreign provider (if available at all – currently only a few companies sell full systems, and those might not transfer the highest-end tech due to intellectual property or export controls), or
  • Invest massive resources to reinvent the entire stack at home, which is slow and may still leave you behind the state of the art.

Neither is ideal for sovereignty. The closed purchase is like buying a black-box supercomputer – you have it, but you didn’t build internal expertise, and you rely on the vendor for upgrades and fixes. The all-internal approach might ensure control but could take too long to be competitive.

Open architecture offers a sweet spot: a country can mix domestic components with imported ones in a controlled way, or source all components from allies, to build a top-tier system while also cultivating local industry for some parts. It’s very much a “have your cake and eat it” strategy:

  • You leverage foreign excellence where it exists (no need to duplicate, say, an existing good solution for cryogenics or control if a friendly country sells it), which saves effort and gets you good tech.
  • Meanwhile, you focus your domestic efforts on parts you care most about (e.g., a country might pour research into novel qubits or into software, becoming world-class in that area, and then export that in turn).
  • Critically, you avoid single-supplier lock-in. If one supplier country becomes unavailable (trade disputes, etc.), the open approach means you can seek another or build that part internally. There’s resiliency in diversification.
  • It also means that if one component has a backdoor or security concern, you can potentially audit or replace it. With open interfaces, a nation might insist on using an in-house secure module for certain tasks, while using commercial components for others.
  • Local ecosystem growth: By assembling a quantum computer domestically (even if some parts are imported), you create demand for local high-tech services – cryo maintenance, software integration, etc. This encourages knowledge transfer. For example, Israel’s IQCC, though it uses a Dutch chip and a UK photonic device, is run by Israeli experts and is spawning local startups and projects around it. Similarly, Italy using a Dutch chip still means Italian students learned to run 64 qubits and can now innovate on top of that.

(There’s a subtle sovereignty point here that often gets missed: owning components is not the same as owning capability. Sovereignty increasingly means having in-country (or in‑alliance) integration competence – people and processes that can commission, secure, maintain, and evolve the platform without depending on a single foreign vendor’s black-box services.)

We see concrete strategies in play:

  • The European Union explicitly funds projects like OpenSuperQ+ that tie into the idea of a European-built quantum computer by pooling strengths across countries. They want to ensure Europe has independent capability, so they invest in an ecosystem of suppliers (French cryostat, Dutch qubits, German software, etc.). The result: Europe won’t have to rely on an American or Chinese quantum system; they can assemble their own from European parts. QOA thus aligns with EU’s tech sovereignty goals. The Tuna-5 story was full of references to strengthening the Dutch/European supply chain and aligning with EU Flagship targets.
  • China (though mostly outside the scope of this article) is also effectively pursuing open architecture, but internally: they have dozens of groups specializing in different layers (one state enterprise on cryo, one on control, etc.) to build domestic systems. They rarely buy foreign anyway due to restrictions, but they illustrate how dividing the problem is letting multiple Chinese companies emerge (e.g., OriginQ focusing on superconducting chips, SpinQ on permanent-magnet desktop quantum computers, etc.).
  • Australia has a growing ecosystem (with Q-CTRL, Silicon Quantum Computing, etc.) – they likely will collaborate regionally (maybe with Japan or others) to share components.
  • United States – The U.S. initially had big vertically integrated efforts (IBM, Google). But now, partly spurred by the need to not fall behind and to spread federal money effectively, they are embracing something like QOA. The formation of public-private partnerships and hubs (e.g., the Elevate Quantum Q-PAC, or Chicago Quantum Exchange’s work linking universities and companies) aims to ensure the U.S. isn’t reliant on any one company either. The Tech Hubs funding for Q-PAC shows an understanding that an open, reference design approach can quickly propagate know-how across many states and institutions.

National security and control is another angle. Governments worry that if their critical computations or intellectual property have to be processed on foreign quantum clouds, that’s a risk. With QOA, a government agency or local cloud provider can own and operate a quantum computer on their premises, tailor-made to their security needs. For example, a defense organization could assemble a quantum machine in a secured facility using components they trust (maybe only buying hardware from domestic or allied sources, and using open-source software to avoid backdoors). The open interfaces mean they’re not beholden to one vendor’s updates; they could maintain the system longer and service it themselves or via local integrators.

Furthermore, standards help with sovereignty because if everyone agrees on interfaces, a country can develop an indigenous component that fits into global systems. Suppose a country has a unique technology, like a novel qubit type or a quantum memory – if the control interface is standard, they can plug that into an otherwise mainstream setup and have a differentiated capability without building it all.

However, achieving sovereignty via QOA isn’t automatic. It requires coordination and investment:

  • Training a workforce that can integrate systems (hence the emphasis on education at IQCC and Q-PAC).
  • Possibly subsidizing or protecting local suppliers so they can grow against bigger foreign competitors.
  • Crafting agreements so that allied countries share components (like the Netherlands supplying chips to Italy), thereby creating an allied supply chain. We see early signs already: the Italy system was essentially an EU collaboration (Dutch chip, Italian lab), and Israel’s center involved contributions from the U.S. (NVIDIA) and others.

Overall, Quantum Open Architecture is emerging as the model for countries to achieve quantum prowess on their own terms. It is to quantum what the open-source movement was to software – a way to empower many to participate and innovate, rather than just consuming tech from a superpower or single corporation. By mixing and matching, nations can play to their strengths, develop local industries, and still not miss out on global advances.

One notable quote from QuantWare’s blog stated: “Local ecosystems can leverage [open architecture] to develop quantum systems using domestic resources and expertise, ensuring critical infrastructure remains under national control… rather than relying on a ‘black box’ solution from a single foreign provider.” That encapsulates it: open architecture is not just an engineering choice, but a strategic one.

Market Dynamics

The rise of QOA is shaking up the quantum computing industry structure and bringing new dynamics. There are big opportunities – but also significant challenges – as the ecosystem becomes more modular.

On the market side, we’re moving from a scenario dominated by a few vertically integrated players (IBM, Google, etc.) to a more distributed supply chain model with many entrants. This fragmentation can spur competition:

  • Competition and Innovation: With many companies focusing on each layer, innovation accelerates. Each layer sees a race: e.g., QPU makers compete to increase qubit counts or fidelities; control system makers race to handle more qubits with lower noise; software firms compete on better error correction. This is healthy and drives rapid improvements that a monolithic approach might not match. As noted earlier, specialization means best-in-breed components become available. We’ve already seen cases where specialized startups achieved something faster than an all-in-one could – e.g., Qblox releasing a neat control solution while a big full-stack company was still using older tech internally.
  • Lower Costs and Commercialization: As component providers scale up and sell to multiple customers, their costs per unit tend to drop (economies of scale). This can make quantum systems cheaper overall, broadening the customer base. Indeed, QuantWare being “high-volume” is cited as enabling cost efficiencies to supply many labs. More affordable machines mean more buyers beyond just governments – possibly commercial enterprises starting to invest in on-prem quantum for specific needs (finance firms, pharma companies etc., might want their own quantum machine for IP/security reasons if it’s affordable and maintainable).
  • New Roles (Integrators, foundries, etc.): The emergence of system integrators as a distinct role (Applied Quantum, TreQ, ParTec) is one new dynamic. They turn what used to be an internal R&D task into a service business. We might see the “Dell of quantum” – a company that doesn’t make any component but specializes in assembling them into a quality product and supporting the customer. These quantum integrators will need deep knowledge and will likely form partnerships with component makers (just as PC integrators have preferred component suppliers). They’ll also compete on who can deliver the best-performing composite system (tuning the combination to eke out performance).
  • Partnerships and Consortia: QOA encourages partnerships – e.g., QPU maker teams up with control maker for a joint sale (like QuantWare+QM did, or QuantWare+Qblox+Q-CTRL). We’ll likely see more multi-company consortium bids for government projects, each bringing a piece. For instance, a national lab RFP might be answered by a team of four startups rather than one big contractor. This can complicate contracting but can also bring together optimal sets of tech. Industry alliances or associations might form to standardize and promote open architectures (similar to how in classical computing, groups formed around standards like CompactPCI, or how the W3C was formed for web standards).
  • Market Uncertainty and Maturation: With many players, some consolidation is inevitable. Not every startup will survive – some might be acquired by bigger ones or fail if they bet on the wrong approach. Over time, we may see dominant suppliers at each layer (maybe a couple of big QPU makers, a couple of control vendors, etc.), akin to how the classical industry has a handful of CPU makers, a few big GPU makers, etc. We’re not there yet – it’s still a bit wild-west, which is both exciting and risky for customers. One challenge is assessing reliability: if you build a system with Startup X’s hardware and they go under, can someone else support it? The risk is mitigated if standards are in place (maybe another vendor’s part can replace it), but not entirely. Early adopters of QOA systems have to choose partners wisely or ensure they have contingency plans (perhaps buying some spare parts or cross-training to use alternatives).
  • Incumbents’ Reactions: The big full-stack companies are of course not standing still. IBM, Google, IonQ, etc., have huge leads in certain areas (like very high coherence qubits or integrated error correction roadmaps). They might adapt by participating in open ecosystems (IBM could start selling some hardware components or software tools to others – IBM already open-sourced Qiskit and has an open QPU design kit with EQIP). Or they might double down on vertical integration, arguing that their tight optimization yields better performance. It will be interesting to see how closed and open approaches compete on performance: a fully integrated system might outperform a patchwork one at first, but the patchwork may improve – and perhaps eventually surpass it – thanks to collective innovation.

Challenges in the Quantum Modular Ecosystem

Now, on to the challenges in making QOA work smoothly:

Integration Complexity

As any systems engineer will attest, assembling components from multiple sources is not plug-and-play (yet) in quantum. There are myriad technical issues: impedance mismatches between qubit chips and control lines, timing alignment between different electronics, cross-vendor software compatibility, etc.

Integration is a “fine art” and requires significant expertise – this is exactly why specialized integrators are needed and are becoming key players. When something goes wrong, finger-pointing can happen (is a qubit fault due to the chip, the pulse from the controller, or cable interference?).

Getting all pieces to function as one reliable system is non-trivial – it often requires custom engineering and debugging that doesn’t cleanly fall to any single vendor. The Tuna-5 team explicitly spent a lot of effort on “extensive testing, iterations, and integration” to get a fully functional system. This integration overhead is a cost that must be factored. Over time, as standards improve and companies co-develop more, it should get easier (like building a PC today is fairly easy because all parts are truly standard and tested together extensively, whereas in the 1980s, PC assembly could be finicky).

Performance Tuning

Even if all components work, optimizing the combined system for peak performance is hard. Each junction between subsystems can introduce inefficiencies (latency, noise, etc.).

For example, maybe a QPU can in theory do high-fidelity gates, but only if the control pulses are shaped in a very precise way that the control hardware and software need to support. Or the cryo fridge might have magnetic interference that affects qubit coherence unless you add extra shielding around the chip.

A closed system built by one team might be optimized end-to-end; an open one might at first have these little losses. Latency is a particular concern in hybrid quantum-classical operations: if the control system or integration software isn’t designed well, a feedback loop (like in QAOA or error correction) could be slower, hurting algorithm performance. That’s why projects like IQCC took care to tightly integrate HPC and quantum controllers to minimize latency. Achieving low latency across modular parts may require co-design (e.g., QM working with NVIDIA to ensure their systems talk fast). If modules are naively connected (say a generic ethernet link between classical and quantum parts), it might be too slow for advanced algorithms. So addressing latency often means agreed protocols or physically co-locating certain components (like having classical processors physically near the quantum hardware, as in IQCC’s on-premise cluster).
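
To make the latency point concrete, here is a minimal back-of-the-envelope sketch in Python. The iteration counts, shot times, and link latencies are illustrative assumptions, not measurements from any of the systems discussed; the only point is that a slow classical-quantum link adds time to every iteration of a feedback loop, even when the quantum hardware itself is unchanged.

```python
# Back-of-the-envelope sketch (hypothetical numbers): why round-trip latency
# between the classical optimizer and the quantum controller dominates a
# variational feedback loop (e.g., QAOA/VQE) when the link is slow.

def loop_wall_clock(iterations, shots, shot_time_s, roundtrip_latency_s, classical_step_s):
    """Estimate total wall-clock time for a hybrid quantum-classical loop.

    Each iteration: send parameters, run `shots` circuit executions,
    return results, run the classical update step.
    """
    per_iteration = roundtrip_latency_s + shots * shot_time_s + classical_step_s
    return iterations * per_iteration

# Co-located controller (microseconds of latency) vs. a generic network link (~50 ms).
colocated = loop_wall_clock(iterations=1000, shots=1000, shot_time_s=50e-6,
                            roundtrip_latency_s=10e-6, classical_step_s=1e-3)
remote    = loop_wall_clock(iterations=1000, shots=1000, shot_time_s=50e-6,
                            roundtrip_latency_s=50e-3, classical_step_s=1e-3)

print(f"co-located: {colocated:.1f} s, remote link: {remote:.1f} s")
# Quantum execution time is identical in both cases; the slower link alone
# adds roughly 50 seconds to this (hypothetical) run.
```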

Standardization (or lack thereof)

We’ve mentioned it as a positive that standards are emerging, but the flip side is that until standards stabilize, integration remains bespoke. We’re still early – one company’s “quantum API” might not work with another’s without a custom adapter. For instance, control software might need a custom driver for each type of controller or QPU. Everyone is working on their own solutions and then interfacing them. This is akin to the early computer days, when peripherals had different connectors before USB unified them.

Lack of standards can slow down adoption: a customer might hesitate to buy an open system fearing vendor lock-in of another kind (locked into this combo of parts because they aren’t interchangeable with alternatives easily yet).

The industry is aware and pushing for standards, but it’s a process. The challenge is to balance innovation and standardization – standardize too early and you risk freezing the technology; standardize too late and you hinder compatibility.
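
To illustrate the “custom adapter” problem, the sketch below shows the kind of thin abstraction layer an integrator might write today to hide vendor differences. The PulseController interface and the VendorAAdapter (including its load/start_sequence/fetch calls) are hypothetical; no real vendor SDK is being quoted, only the shape of the problem.

```python
# Sketch of the adapter pattern described above: until interfaces are
# standardized, every controller/QPU pairing needs a custom driver.
# All class and method names are hypothetical, not any vendor's real API.

from abc import ABC, abstractmethod

class PulseController(ABC):
    """Common interface an integrator might define across control vendors."""

    @abstractmethod
    def upload_waveform(self, channel: int, samples: list[float]) -> None: ...

    @abstractmethod
    def trigger(self) -> None: ...

    @abstractmethod
    def read_acquisition(self, channel: int) -> list[complex]: ...


class VendorAAdapter(PulseController):
    """Wraps a (hypothetical) vendor SDK behind the common interface."""
    def __init__(self, sdk_session):
        self._sdk = sdk_session  # vendor-specific handle

    def upload_waveform(self, channel, samples):
        self._sdk.load(channel, samples)     # hypothetical vendor call

    def trigger(self):
        self._sdk.start_sequence()           # hypothetical vendor call

    def read_acquisition(self, channel):
        return self._sdk.fetch(channel)      # hypothetical vendor call


def run_calibration(controller: PulseController):
    """Higher-level code depends only on the interface, not on the vendor."""
    controller.upload_waveform(0, [0.0, 0.5, 1.0, 0.5, 0.0])
    controller.trigger()
    return controller.read_acquisition(0)
```

If a standard interface eventually plays the role of PulseController industry-wide, the adapters shrink or disappear – which is exactly the shift from bespoke integration to plug-and-play that the text describes.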

Quality Assurance and Responsibility

In a multi-vendor system, if something breaks, who is responsible for fixing it? With one vendor, at least you have “one throat to choke”. In QOA, maybe the qubits have an issue but it’s due to a fridge temperature instability – is that the fridge vendor’s problem or the integrator’s?

Service agreements in such environments need clarity. The emerging model might be that system integrators take on the prime contractor role – they may guarantee the system performance to the end customer and then manage sub-vendors behind the scenes. But integrators are new and relatively small (Applied Quantum, TreQ, etc., are startups themselves), so this could be tricky until they grow. Alternatively, bigger companies might start acting as overall integrators (maybe a company like Atos or HP Enterprise will say “we’ll deliver you a working quantum system, using X, Y, Z components, and we take responsibility as general contractor”). This is evolving, and customers will need clear support plans.

Maintaining Coherence with Scale

This is a technical challenge all quantum systems face, but open ones might find it tougher initially. As you add more qubits and more cables and more control channels, the complexity grows non-linearly. Ensuring that an open architecture approach can scale to, say, hundreds or thousands of qubits is not proven yet. It’s one thing to integrate a 5 or 50 qubit system from modules; it’s another to do a 1000-qubit one. There may be unforeseen system-level effects (like cross-talk across cables, heating issues in the fridge, or simply software bottlenecks coordinating so many qubits). Full-stack companies argue their vertical approach is needed to reach such scale because they can co-design everything. Open advocates would counter that once interfaces are stable, you can scale by parallelizing (e.g., just add more QPU modules, more control racks). The truth will emerge as we push upward.

Investment and Business Sustainability

The modular ecosystem requires a different business mindset. Some investors might worry, “if you’re only making one part, can you capture enough value, or will you be commoditized?” In classical computing, some component makers ended up with low margins (e.g., memory manufacturers in PCs had commodity economics, while Intel/Microsoft reaped outsized profits). In quantum, maybe the QPU becomes commoditized and software yields more value, or vice versa. Each player is betting on being essential.

There’s also duplication of effort risk: multiple startups might each be making a 50-qubit chip – not all will find a market if buyers converge on a favorite. So there’s a shakeout looming. The challenge for the industry is to grow the pie (get more total systems built) so that multiple suppliers can thrive, rather than fight over a tiny market. Initiatives like QUB (which effectively creates more demand by packaging a whole solution) help address this by making it easier for a new customer to say yes to a quantum system.

Cultural Shift

Finally, a softer challenge: convincing more conservative stakeholders to embrace open architectures. Some large organizations might instinctively want a single accountable vendor or fear that assembling a system is too complex. It takes success stories to change minds. But as more case studies like Naples or IQCC show real results, the comfort with open approaches will grow.

The tone of media and reports is already shifting to comparing QOA to the PC revolution (a positive narrative). The challenge is ensuring early open systems deliver real “quantum value” (solving meaningful problems). If they do, that will overcome any hesitancy. If early systems disappoint or only labs play with them without achieving breakthroughs, some might say “see, better to wait for IBM’s next machine.” So the pressure is on these open systems to compete not just in concept, but in performance and usefulness.


In summary, the ecosystem is dynamic, with many moving parts and players. The challenges are real – integration, standards, support, performance – but none appear insurmountable. They are typical of a maturing industry: growing pains as roles get sorted out. The positive side is that the market is expanding and diversifying, which usually indicates a healthy innovation climate. As one article noted, “rather than vertical integration dominating, specialization across hardware, software, and services may emerge as the prevalent model… This shift could accelerate innovation while reducing barriers for new entrants.”

The role of new system integrators will be particularly pivotal. They are the linchpins to solve integration headaches and deliver seamless experiences from disjoint parts. If they succeed, they will usher in an era where using a quantum computer doesn’t require thinking about qubits vs. controllers vs. software – it will just be a reliable machine assembled behind the scenes, much like a PC or smartphone is today. But as of now, we’re still in the assembly and experimentation stage, akin to the hobbyist computer kits of the 1970s or the early clone PCs of the 1980s. The trajectory suggests things will standardize and simplify, yet the next few years will involve lots of engineering elbow grease.

Future Outlook – Towards a Plug-and-Play Quantum World

Peering five to ten years ahead, we venture into informed speculation. If Quantum Open Architecture continues its current trajectory, what could the quantum landscape look like? Here are some possibilities and visions – flagged as speculative, but grounded in analogies to classical tech evolution and early signals in the industry:

Standard QPU Sockets and Modular Upgrades

Just as one can upgrade a PC’s CPU or GPU, we may see standardized “quantum sockets” that allow QPUs to be swapped or upgraded in an existing system. For example, a dilution fridge might come with a standardized slot where a quantum processor module (perhaps a chip on a multi-layer package with integrated filters) can be inserted.

Companies could fabricate new chips with more qubits or better qubits that fit the same slot. Users (or integrators) could then upgrade their quantum computer’s brain without replacing the whole infrastructure.

There are already hints of this: some cryostats use standard chip mounts, and QuantWare’s concept of a QPU marketplace depends on easy swapping. A standardized socket might emerge from consortia (like how ZIF sockets for CPUs became standard). This could also facilitate mixing qubit types – envision a fridge with multiple sockets, one holding a superconducting processor, another holding a photonic interface chip, etc., all connected by a common bus (maybe an optical fiber or superconducting link).

Essentially, we could get a quantum equivalent of a motherboard, where qubit modules plug in alongside classical co-processors or memory units. This would dramatically increase the flexibility and longevity of quantum installations – buy a machine now, upgrade the qubit module in two years when a better one is out, much like upgrading your graphics card.
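
If such a socket standard ever materialized, it would likely come with a machine-readable module descriptor so an integrator could check compatibility before physically swapping anything. The sketch below imagines what that might look like; every field name and the compatibility check are speculative illustrations, not an existing specification.

```python
# Speculative sketch of a "QPU socket" module descriptor a consortium might
# publish. All fields are hypothetical illustrations, not a real standard.

from dataclasses import dataclass

@dataclass(frozen=True)
class QPUModuleDescriptor:
    vendor: str
    qubit_count: int
    qubit_type: str                        # e.g., "tunable transmon"
    signal_lines: int                      # coax/flex lines the socket must route
    drive_band_ghz: tuple[float, float]    # frequency range the control electronics must cover
    readout_band_ghz: tuple[float, float]
    heat_load_at_mxc_uw: float             # heat budget at the mixing-chamber stage
    mount_standard: str                    # e.g., "socket-rev-1", once such a spec exists

def fits(module: QPUModuleDescriptor, fridge_lines: int, fridge_budget_uw: float) -> bool:
    """Check a candidate upgrade against the installed fridge's wiring and heat budget."""
    return (module.signal_lines <= fridge_lines
            and module.heat_load_at_mxc_uw <= fridge_budget_uw)
```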

Quantum App Stores and Firmware Marketplaces

Once you have many users owning or accessing modular quantum systems, there will be demand for easily deployable software solutions to enhance them – much like app stores for phones or software marketplaces for cloud services. We might see a “Quantum App Store” where users can download modules such as:

  • Calibration routines optimized for their specific hardware setup (e.g., a new AI algorithm that tunes qubits 2x faster – perhaps sold by a company like QuantrolOx or an open-source contribution).
  • Error mitigation packages (some already exist, like Q-CTRL’s Fire Opal, but these could be packaged as plug-ins).
  • Optimized compilers for certain algorithms (imagine a package that, for a fee, provides the best transpilation for chemistry problems on your type of QPU).
  • Benchmarking and diagnostics tools – an app that runs a suite of tests on your system and gives health metrics.
  • Algorithm libraries – e.g., a quantum machine learning library that’s pre-calibrated to run efficiently on your hardware, available as an installable module.

In essence, a quantum ecosystem of software and firmware could flourish once there’s a critical mass of deployed QOA systems. This is analogous to how PC software boomed once PCs were widespread, or how smartphone app stores emerged once enough people had iPhones/Androids.

Importantly, this software would be hardware-aware – developers could target a class of hardware (say all QUB Large systems, or all tunable-transmon QPUs) and users could grab those enhancements. For example, Q-CTRL might distribute an “error reduction app” that any Qblox+QuantWare system owner can install to improve performance by X%. We’re already sort of seeing this in enterprise cloud (AWS Marketplace has specialized quantum algorithms one can rent). An app store formalizes and streamlines it.
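
A hardware-aware marketplace would also need some way for an add-on to declare which installations it targets. The toy sketch below illustrates that matching logic only; the plug-in names, target fields, and registry format are invented for illustration and do not describe any real marketplace or vendor product.

```python
# Toy sketch of hardware-aware plug-in matching for the app-store idea above.
# The registry entries and field names are hypothetical.

PLUGINS = [
    {
        "name": "error-reduction-suite",
        "targets": {"qubit_type": "tunable transmon", "controller": "vendor-a"},
        "claims": "reduces gate error via optimized pulse shaping",
    },
    {
        "name": "chemistry-transpiler",
        "targets": {"qubit_type": "fixed-frequency transmon", "controller": "any"},
        "claims": "better transpilation for molecular Hamiltonians",
    },
]

def compatible_plugins(system_profile: dict) -> list[str]:
    """Return plug-ins whose declared targets match this installation's profile."""
    matches = []
    for plugin in PLUGINS:
        ok = all(
            value == "any" or system_profile.get(key) == value
            for key, value in plugin["targets"].items()
        )
        if ok:
            matches.append(plugin["name"])
    return matches

print(compatible_plugins({"qubit_type": "tunable transmon", "controller": "vendor-a"}))
# ['error-reduction-suite']
```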

Quantum Hardware-as-a-Service (QHaaS)

In the classical world, cloud providers offer access to hardware as a service (e.g., GPUs via the cloud). For quantum, we have cloud access already, but the next step could be regional or dedicated hardware services. For instance, a company might rent a turn-key QOA system that is installed on-premise but maintained by the vendor (like leasing a server that is managed). This is partly already happening with players like Oxford Quantum Circuits offering QC through data centers.

But open architecture could accelerate local quantum data centers. Because multiple vendors can supply parts, a local integrator (say an IT service company in a country) could source those parts and offer quantum compute capacity as a local cloud, tailored for domestic customers who want lower latency or higher security than global cloud.

We might see specialized quantum data center operators akin to classical colocation providers, where different quantum racks (from different vendors) sit and clients choose which to use. HPC centers will definitely integrate QOA systems as accelerators – there is already talk of scheduling quantum jobs in supercomputers similarly to GPU jobs.

Within a decade, it could be routine for a supercomputer procurement to include a quantum section, and thanks to QOA, that quantum section could be multi-vendor (e.g., a few annealers, a few gate-based machines, all accessible through the center’s unified scheduler).

Another angle is time-sharing of modular quantum hardware: maybe one can rent just a QPU module and plug it into their own control system for a period. If, say, QuantWare produces a new 200-qubit chip but you don’t want to buy it outright, you might rent it and test it in your fridge for 6 months. Think of this like renting expensive lab equipment. Such flexibility could be facilitated by standard interfaces – if everyone’s hardware can interoperate, the concept of “bring your own QPU” to a facility could happen. IQCC is a step in that direction (users bringing chips to test on QM’s setup).

Global Quantum Networks and Plug-and-Play Ecosystems

Looking farther out, open architecture at the system level could extend to open architecture at the network level. Once quantum computers are modular and numerous, connecting them via quantum networks becomes tantalizing – for distributed quantum processing or secure communication.

We might see the rise of quantum clusters: multiple quantum nodes connected to tackle larger problems collectively. If each node is built on QOA principles, they might each have different strengths (one node has a very low-error but small qubit core, another has many qubits but noisier, etc.). Together they could perform a task using quantum networking protocols.

A speculative scenario: one could purchase quantum computing capacity from a global pool – for example, 100 qubits from a datacenter in Europe linked with 100 qubits from an Asian datacenter via entanglement, orchestrated as one logical machine. Achieving that requires quantum interconnects (a whole field of research on its own, with efforts like quantum repeaters, etc.), but if achieved, it would truly be plug-and-play on a planetary scale – quantum computing resources from anywhere interoperating.

Even without long-distance entanglement, cloud APIs might evolve to let users distribute computations among different quantum processors (classically coordinating them). For example, a hybrid algorithm might call an IonQ machine for one subroutine and a superconducting machine for another within the same workflow – picking each for what it’s best at. This kind of orchestration would make the global quantum ecosystem feel like one big plug-and-play platform. Open software standards (like common circuit description languages) will be key for this interoperability.
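
The sketch below illustrates that kind of purely classical orchestration: a dispatcher routes each subroutine to whichever processor best matches its needs. The Backend class, its run method, and the listed “strengths” are placeholders standing in for real cloud SDK calls and device properties.

```python
# Sketch of classically coordinated multi-backend orchestration: route each
# subroutine to the processor best suited to it. Backend objects here are
# hypothetical stand-ins for real device/cloud APIs.

class Backend:
    def __init__(self, name, strengths):
        self.name = name
        self.strengths = set(strengths)

    def run(self, circuit, shots=1000):
        # In practice this would submit the circuit to a real device or cloud API.
        return {"backend": self.name, "circuit": circuit, "shots": shots}

TRAPPED_ION     = Backend("trapped-ion",     {"high-fidelity", "all-to-all"})
SUPERCONDUCTING = Backend("superconducting", {"fast-clock", "many-qubits"})

def dispatch(circuit, needs):
    """Pick the backend whose strengths best cover the subroutine's needs."""
    best = max((TRAPPED_ION, SUPERCONDUCTING),
               key=lambda b: len(b.strengths & set(needs)))
    return best.run(circuit)

# A hybrid workflow mixing both machines, coordinated entirely classically:
state_prep = dispatch("deep-entangling-subroutine", needs={"high-fidelity", "all-to-all"})
sampling   = dispatch("wide-shallow-sampling",      needs={"many-qubits", "fast-clock"})
print(state_prep["backend"], sampling["backend"])   # trapped-ion superconducting
```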

Evolving Role of Big Players

The big integrated providers may pivot to become platforms or foundries for others. IBM might open its quantum fabric for others to integrate (IBM could, for instance, allow its larger chips to be bought and used in others’ systems by 2030 if they decide open is beneficial). Companies like Google might provide their quantum control as a service.

Or conversely, they might focus on being the “Intel” that sells the most advanced QPUs while leaving systems integration to others. Already, IBM is collaborating more (like with Bosch on cryo control electronics, etc.). The speculative vision is a less siloed industry, where even the giants contribute to an ecosystem – much like how Microsoft makes software that runs on Dell, HP, Lenovo computers, etc., IBM could supply qubit chips or core IP that many integrators use. This could accelerate the field by spreading cutting-edge tech more widely (rather than only available behind IBM’s cloud).

Quantum “Dev Kits” and Widespread Learning

As QOA matures, we might even see educational quantum computer kits – simplified, small-scale versions of open-architecture systems for training and R&D. For instance, a university lab might buy a 2-qubit transmon kit with a mini-fridge and USB-controlled electronics (some startups already offer 1-qubit desktop setups). Extend that to 5-10 qubit kits affordable by university labs or even companies that just want to train staff. Because components are commoditizing, this could be feasible.

This would produce a generation of engineers familiar with quantum hardware at the nuts-and-bolts level, greatly expanding the talent pool, just as widespread hobbyist computers in the 1970s/80s led to many more programmers and engineers. Such proliferation of knowledge would feed back into faster development of the technology.


In forecasting the future, it’s useful to remember how unpredictably fast classical computing evolved once the modular, open approach took hold. In the late 1970s, personal computers were mostly hobby projects; by the late 1980s, PCs were everywhere and far more powerful, standardized largely by “open architecture” (IBM-compatible clones). We could see an analog with quantum: the late 2020s as the hobbyist/professional niche phase (with QOA systems like kits and small deployments), and by the late 2030s, a more standardized, ubiquitous presence of quantum co-processors integrated into broader computing environments.

That said, quantum physics is not the same as classical bits – there are fundamental limits and challenges that classical computing did not face. We must temper optimism with realism: qubits are fragile, and truly mass deployment depends on breakthroughs in error correction and scalability that are still in development. Quantum Open Architecture doesn’t solve the physics challenges; what it does is create a framework for many hands and minds to contribute to solving those challenges, by lowering entry barriers and encouraging collaboration. This greatly increases the odds of success and perhaps the speed of reaching it. The speculation above assumes steady progress in qubit quality and some early forms of error correction coming to fruition (like reaching a point where 1000 physical qubits can behave like 1 error-corrected logical qubit, and scaling from there).

In a more modest near-term outlook, by the end of this decade we could realistically expect:

  • Multiple countries will have their “national quantum computer” built via QOA consortia (we’ve seen the Netherlands and Italy already; Germany, France, the UK, Japan, Canada, and Australia will likely do the same).
  • A small market of quantum system integrators and vendors will be selling and supporting 50-100 qubit range machines to research labs and maybe some tech-savvy companies.
  • Standards bodies or de-facto standards will be established for key interfaces (perhaps a Quantum Modular Architecture Standard for hardware, and a consensus intermediate representation for software).
  • Some specialization by application might start: e.g., certain QOA systems optimized for quantum chemistry (with qubits and gate sets tuned for that), others for optimization problems – analogous to how some classical servers are optimized for AI with special GPUs. This was hinted with cryostats for different applications and could extend to entire stack optimizations.
  • Hybrid HPC-Quantum installations will become routine in supercomputing centers, with scheduling software seamlessly dispatching tasks to quantum or classical resources as appropriate.
  • If all goes well, a primitive form of error-corrected quantum computing might be demonstrated on one of these modular systems (like using a QOA system to run a small error-correcting code across, say, 20 physical qubits to make one stable logical qubit – a toy sketch of the idea follows this list). Achieving error correction in an open architecture environment would be a litmus test that modular approaches can handle the extreme synchronization and control needed for that feat.
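
To give a sense of what that litmus test involves at toy scale, the sketch below builds a 3-qubit bit-flip repetition code with two syndrome ancillas in Qiskit (the open-source framework mentioned earlier). It shows only the encode-and-syndrome-measurement structure; a genuinely error-corrected logical qubit would require many more physical qubits, repeated syndrome cycles, and fast classical feedback from the control system.

```python
# Toy 3-qubit bit-flip repetition code with two syndrome ancillas, built in
# Qiskit. Illustrative only: real error correction at the scale discussed
# above needs far more qubits and real-time classical feedback.

from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)   # qubits 0-2: data, qubits 3-4: syndrome ancillas

# Encode the state of qubit 0 redundantly across three data qubits.
qc.cx(0, 1)
qc.cx(0, 2)

# Syndrome extraction: ancilla 3 checks the parity of qubits 0 and 1,
# ancilla 4 checks the parity of qubits 1 and 2.
qc.cx(0, 3)
qc.cx(1, 3)
qc.cx(1, 4)
qc.cx(2, 4)

# Measuring the ancillas reveals which data qubit (if any) suffered a bit
# flip, without directly measuring the encoded logical state.
qc.measure(3, 0)
qc.measure(4, 1)

print(qc)
```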

In conclusion, the future likely holds a diverse quantum computing ecosystem that is increasingly open, collaborative, and innovative. We might look back and say QOA was the catalyst that turned quantum computing from a handful of isolated mainframes into a globally networked, vibrant industry – much like how the open architecture of PCs and the internet transformed computing forever. The speculative ideas of quantum app stores or plug-and-play quantum modules underscore a vision where using a quantum computer could one day be as routine as using a cloud server today, with the ability to tailor and upgrade as needed.

It’s an exciting trajectory: Quantum’s “PC moment” is just dawning, and if history is any guide, the next chapters will be full of surprises and rapid progress. The key takeaway is that openness – in architecture, in collaboration, in mindset – is poised to accelerate the quantum revolution, making it a more inclusive and dynamic journey. The story of QOA is still being written, but it increasingly reads like the story of how quantum computing becomes a mainstream, widely empowered technology rather than the province of a few.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.