Introduction
Quantum computing is rapidly shifting from lab prototypes to cloud-based services. Most organizations will access quantum capabilities “as a service” through cloud platforms, rather than owning a quantum computer on-premises.
This shift reframes the sovereignty debate. The question is no longer simply “Who owns the qubits?” but rather “Who controls access to those qubits?” When quantum processing is delivered via remote services, national and regional authorities must consider issues like scheduling control, data residency, auditability, and supply-chain trust in the entire service stack.
In short, quantum computing sovereignty hinges on control over the compute access model itself.
From Owning Qubits to Governing Access
For emerging quantum services, control over access can be as important as ownership of the hardware. In a cloud-based quantum model, users send jobs to remote quantum processors via an API, and results are returned over the network. This convenience comes with trade-offs. A nation might not physically own the quantum machine, yet the service provider effectively controls who can use it, when, and under what rules. Key sovereignty questions therefore shift to: Who manages the scheduling and priority of quantum jobs? Who can audit and verify the computations? Where do the code and data reside during processing? And under whose jurisdiction and export laws does the service operate?
Scheduling and priority control
If a country relies on a foreign cloud provider’s quantum service, it cedes some control over when and how its critical workloads run. In a crunch, the provider might prioritize its own domestic users or commercial clients.
By contrast, sovereign access would mean guaranteed or preferential scheduling for national users. This is analogous to HPC usage during emergencies – countries want assurance that vital jobs (e.g. defense simulations or disaster modeling) won’t be de-prioritized due to foreign policies or commercial congestion. Gaining such control often requires either owning the system or having contractual Service Level Agreements (SLAs) reserving capacity and priority for sovereign use.
For example, Europe’s approach to quantum infrastructure emphasizes broad democratic access among European users, achieved by integrating quantum processors into publicly funded supercomputers. Once deployed, these systems are available to European academia, industry, and public sector users via EuroHPC access calls, rather than being controlled by a single vendor’s queue. This model ensures Europe can allocate and schedule quantum resources according to its own strategic priorities, not someone else’s.
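To make the scheduling point concrete, here is a minimal sketch (in Python) of the kind of priority-aware dispatch a sovereign operator might enforce. The tier names and the policy itself are illustrative assumptions, not an actual EuroHPC scheduling rule:

```python
import heapq
import itertools

# Hypothetical priority tiers for a sovereign job queue (an assumption for
# illustration, not a real policy): lower number = dispatched first,
# first-in-first-out within a tier.
PRIORITY = {"national-critical": 0, "public-research": 1, "commercial": 2}

class SovereignScheduler:
    """Toy dispatcher: sovereign-tier jobs always leave the queue first."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves submission order

    def submit(self, name: str, tier: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[tier], next(self._seq), name, tier))

    def next_job(self):
        return heapq.heappop(self._heap)[2:] if self._heap else None

sched = SovereignScheduler()
sched.submit("portfolio-optimization", "commercial")
sched.submit("flood-model", "national-critical")
sched.submit("materials-vqe", "public-research")
while (job := sched.next_job()) is not None:
    print("dispatching:", *job)
# dispatching: flood-model national-critical
# dispatching: materials-vqe public-research
# dispatching: portfolio-optimization commercial
```

With a foreign provider, the equivalent guarantee would live in an SLA clause rather than in code you control – which is precisely the difference sovereignty advocates point to.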
Data locality and residency
Sovereignty also demands that sensitive data stays within controlled jurisdictions. When using a cloud quantum service, the code (quantum circuits) and input data are typically sent to the provider’s servers, which might be in another country. This raises legal and security concerns if, for instance, European personal data or IP are processed on U.S. soil under U.S. jurisdiction.
Many governments now insist on “on-soil” solutions – quantum machines or cloud data centers located domestically or in allied territory – so that local data protection laws apply. Germany’s arrangement with IBM is a case in point: IBM installed an IBM Quantum System One in Germany (at Ehningen) in 2021, ensuring all project data remains in-country under German/EU law. This provided German researchers cloud-like access to a cutting-edge 27-qubit processor with full European data locality.
European regulations increasingly favor on-soil hosting of critical cloud services, encouraging providers to localize infrastructure. Indeed, IBM went on to open its first European quantum data center – announced in 2023 and opened in Ehningen in 2024 – hosting multiple 127-qubit machines in Germany, explicitly to meet EU clients’ data residency and compliance needs.
The goal is that quantum computations happen under domestic oversight, preventing scenarios where foreign authorities might subpoena or block access to data mid-computation.
Auditability and transparency
With quantum-as-a-service, trust hinges on auditability – the ability to verify what was executed and to reproduce results given quantum’s probabilistic nature. Sovereign use cases (e.g. in finance, defense, or critical infrastructure) require transparent logs and controls.
Best practices from early quantum cloud contracts show the need to demand a “paper trail” for every quantum job. This means the provider should record which exact device was used, its calibration state at run-time, how many shots (repeated runs) were taken, the random seed, error mitigation settings, etc. Such metadata allows national regulators or scientists to audit and reproduce the results independently, which is crucial for model validation and for investigating any anomalies. Without these guarantees, a country might be forced to blindly trust a foreign black-box service – an unacceptable risk for high-stakes applications.
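As an illustration, such a paper trail could be captured in a structured record like the sketch below. The field names and schema are my own assumptions for illustration, not any provider’s actual metadata format:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class QuantumJobAuditRecord:
    """Illustrative audit record for one quantum job (hypothetical schema)."""
    job_id: str
    backend_name: str            # the exact device the job ran on
    backend_calibration_id: str  # calibration snapshot valid at run time
    circuit_sha256: str          # fingerprint of the submitted circuit
    shots: int                   # number of repeated runs taken
    seed: int                    # random seed, for reproducibility
    error_mitigation: dict       # e.g. {"readout_mitigation": True}
    executed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(circuit_text: str) -> str:
    """Hash the circuit source so auditors can later prove what was executed."""
    return hashlib.sha256(circuit_text.encode()).hexdigest()

record = QuantumJobAuditRecord(
    job_id="job-0042",
    backend_name="example_device_eu_1",
    backend_calibration_id="cal-2025-01-15T06:00Z",
    circuit_sha256=fingerprint("OPENQASM 3.0; qubit[2] q; h q[0];"),
    shots=4096,
    seed=1234,
    error_mitigation={"readout_mitigation": True},
)
print(json.dumps(asdict(record), indent=2))  # append to a tamper-evident log
```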
Moreover, limiting the provider’s insight into the workloads can itself be a sovereignty concern: the structure of one’s quantum circuits and parameters could reveal strategic information (e.g. the type of problem being solved). Thus, agreements often treat the circuits and run logs as sensitive intellectual property, prohibiting the provider from examining or using them beyond what’s necessary to execute the job.
In short, quantum sovereignty means being able to trust but verify – having independent oversight over the computations performed on foreign hardware.
Export controls and jurisdictional risk
The geopolitical environment adds another layer. Quantum technology is increasingly seen as strategic and dual-use, subject to export controls.
In 2024, the U.S. tightened controls on certain quantum computing technologies and even investor activity. This means a U.S.-based quantum cloud might be legally barred from executing certain algorithms or serving certain users without permission. For example, a European company’s attempt to run an encryption-breaking algorithm on an American quantum service could be blocked or flagged as an export-controlled activity.
These regulatory shifts “flow directly into data-location [and] provider selection” decisions for quantum cloud customers. A sovereign strategy therefore avoids being at the mercy of another country’s export policies. By hosting quantum hardware domestically or with trusted partners, nations ensure that critical workloads (e.g. defense optimizations, cryptanalysis, or other sensitive simulations) won’t be unilaterally cut off due to foreign sanctions or trade restrictions. Even friendly nations may impose sudden restrictions in a crisis, so having a buffer – either an on-premise quantum resource or a guaranteed local alternative – is viewed as prudent.
It’s telling that the recent AUKUS security pact between the US, UK, and Australia explicitly included quantum technologies, relaxing export controls among those allies to form an inner circle of trust. Others, like the EU, interpret this as a sign that they too must secure independent access to quantum tech or risk exclusion.
Supply-chain provenance
Finally, sovereignty extends to the supply chain of the quantum service stack – from the qubit chips and cryostats up through the control software. If a nation’s quantum capability depends entirely on a single foreign supplier (with proprietary firmware, cloud orchestration, etc.), that supplier becomes a potential single point of failure or leverage.
Europe has learned this lesson in classical HPC: reliance on non-European processors and cloud platforms created strategic vulnerabilities. Consequently, European policymakers now stress building a “resilient, sovereign quantum ecosystem” with investments in domestic quantum chips, cryogenic systems, and software, to reduce reliance on U.S. or Chinese vendors. Even if not every piece can be homegrown, they seek supply-chain transparency and diversity.
In cloud contracts, this translates to requiring disclosure of key dependencies and the right to audit or veto certain components. The European Commission’s new Cloud Sovereignty Framework, for instance, evaluates cloud providers on eight objectives including supply chain transparency and technological openness. Applying similar criteria to Quantum-Compute-as-a-Service, a sovereign-minded buyer would ask: Where are the quantum chips fabricated? Which country’s firmware runs the controller? Is the software stack open source or at least escrowed? Are there backdoor risks in any component? By probing these questions and favoring open architectures, countries can avoid hidden backdoors or choke points.
A telling development is the rise of quantum software interoperability initiatives – e.g. OpenQASM 3 and quantum intermediate representations – aiming to make quantum programs portable across different hardware. If successful, these open standards prevent any one vendor’s stack from “locking in” users, thus enhancing sovereignty by preserving optionality.
Lessons from HPC and Cloud Sovereignty
The concerns above echo the debates around HPC (High-Performance Computing) and AI cloud sovereignty in recent years. The playbook emerging there offers valuable lessons for quantum.
Europe in particular has treated supercomputing capability as a strategic asset, not to be wholly outsourced. In 2018, the EU created the EuroHPC Joint Undertaking to pool resources for world-class supercomputers on European soil, after realizing that relying solely on foreign (mainly U.S.) infrastructure could become a strategic liability. Today, EuroHPC has procured multiple petascale, pre-exascale, and now exascale supercomputers across EU member states, co-funded by the EU and national governments. These machines use a lot of foreign technology (e.g. U.S.-made CPUs/GPUs), but the control plane and governance are European. European scientists get access via EuroHPC’s calls regardless of their country, and usage is governed by European policies.
This hybrid approach – own the infrastructure even if some tech is imported – has boosted Europe’s digital autonomy without isolating it from technological advances. It illustrates when reliance on foreign providers is acceptable: Europe is fine buying Intel or NVIDIA chips for its supercomputers, as long as the systems are operated under EU contracts and the data stays in Europe. What would cross into “strategic exposure” is if Europe abandoned owning machines entirely and just rented HPC time from a foreign public cloud. That path was consciously avoided for critical computing needs. (Notably, the UK’s weather service did partner with Microsoft Azure to host a climate supercomputer in the cloud – the world’s first cloud-based weather HPC – but such moves are exceptions, often accompanied by strict data agreements and driven by unique requirements.)
Instead, the general trend has been toward sovereign cloud frameworks: allowing foreign cloud providers to serve government or critical workloads only under stringent conditions. In October 2025, the European Commission launched a €180M tender for “sovereign cloud” services, setting a benchmark for how sovereignty must be “applied in practice to cloud services”. Up to four providers would be awarded contracts under this framework, which measures things like legal jurisdiction, operational autonomy, open technology, security, and compliance with EU laws. In other words, Europe is saying: we might use Amazon, Microsoft, IBM, etc., but only if they meet our sovereignty criteria (local control, compliance, transparency, etc.).
This is a powerful lesson for quantum compute as well: it’s possible to engage foreign vendors while contractually binding them to sovereignty requirements. For example, a quantum cloud provider could be required to operate via a Europe-based data center with EU citizen staff, guarantee no data leaves the region, undergo code audits, and agree to EU legal jurisdiction in case of disputes. We see early instances of this in the IBM-Fraunhofer deal: IBM’s quantum computer in Germany is operated in accordance with German data protection law, and all user data remains in Germany at all times. This effectively imported IBM’s cutting-edge tech but on European terms. The contractual and architectural safeguards turned a foreign technology into a quasi-domestic service.
Another lesson from HPC/AI is the value of hybrid models and alliance-based co-investment. Very few nations can achieve complete self-sufficiency in supercomputing or AI – the ecosystems are too global and complex. Instead, like-minded countries have joined forces to co-develop or co-procure systems, sharing both the costs and the benefits. The EuroHPC model of co-investment is a prime example: multiple EU nations chip in funds and expertise to host a supercomputer in one location (e.g. Finland for the LUMI supercomputer, Italy for Leonardo, etc.), and in return each country’s scientists get a slice of the machine’s capacity. This spreads both the risk and the reward, ensuring no single country is left without access.
We see a similar impulse in quantum. Europe’s approach has been to coordinate quantum procurement across countries, so that the continent collectively owns a diversified portfolio of quantum devices. Six sites were selected in late 2022 to host EuroHPC quantum computers, each with a different technology, co-funded by the EU and participating states. For instance, the LUMI-Q consortium’s upcoming quantum system in Ostrava will be co-financed 50/50 by EuroHPC and a consortium of nine countries (Czechia, Finland, Sweden, etc.), truly a pan-European effort. No one country could as easily justify the expense of multiple quantum platforms on its own, but together they can – and everyone in the group gains sovereign access to all those platforms. This “co-procurement with allies” mitigates strategic exposure: even if one vendor or technology falters or one country faces restrictions, the allied pool provides alternatives.
Perhaps the clearest takeaway is the notion of “sovereign optionality”. Rather than striving for total autarky (impossible in quantum, at least today), nations are trying to maximize their options. They invest locally where they have strengths, collaborate with allies to fill gaps, and maintain the freedom to pivot if geopolitics change. In HPC, this meant developing European CPUs (like the upcoming RISC-V chips) to reduce 100% dependence on U.S. suppliers, but also continuing to use the best available tech in the interim. In AI, it means using U.S. cloud GPUs now, while building European AI compute clusters for the future.
By the same token, in quantum we can expect a mix of approaches – using IBM/AWS/Google clouds for early experimentation, but simultaneously funding domestic quantum startups (in hardware and software) and setting up domestic or allied cloud platforms. The goal is to avoid a scenario where a single foreign entity “holds all the keys” to your quantum future. By the time quantum computing matures to truly mission-critical capability, a nation should have multiple avenues to access it – whether through its own quantum computers, shared regional facilities, or at least ironclad contracts with trusted providers. This multi-pronged strategy is already visible in Europe and other regions, as they race to prepare for the coming quantum era without being locked into one supplier or caught empty-handed.
Europe’s Quantum-HPC Sovereignty Strategy
To illustrate these principles in action, consider Europe’s ongoing push to integrate quantum computing into public HPC infrastructure. The EU has explicitly tied quantum computing to its quest for “strategic autonomy,” stating that Europe must not remain “a mere consumer of others’ quantum tech”. Backed by programs like the Quantum Flagship and Digital Europe Programme, Europe is expanding its quantum capacity via coordinated procurement and HPC integration. This approach treats quantum processors as specialized accelerators attached to supercomputers, ensuring European end-users can access them through domestic infrastructure rather than foreign clouds.
A milestone came in late 2022, when the EuroHPC Joint Undertaking selected six sites across Europe to host EuroHPC quantum computers (often dubbed EuroQCS, for European Quantum Computing and Simulation), with procurements and installations following through 2023-2025. These are real quantum machines (not just simulators) covering a spectrum of technologies. For example, Poland’s PIAST-Q (EuroQCS-Poland) is a trapped-ion system, the Czech and German installations use superconducting qubits, France’s is a neutral-atom analogue simulator, Spain’s (with Qilimanjaro tech) focuses on analog/adiabatic quantum computing, and so on. Notably, one of the EuroHPC systems is an adiabatic quantum annealer, enabling Europe to execute quantum annealing routines for optimization problems. Another two devices – supplied by France’s PASQAL – are analogue quantum simulators with over 100 qubits, which are being tightly integrated into Tier-0 supercomputers (the Joliot-Curie in France and JUWELS in Germany) under the HPCQS project. The HPCQS initiative is explicitly aimed at developing a cloud-based federated infrastructure that links these quantum simulators with classical HPC, effectively giving users a seamless hybrid computing environment across borders.
In September 2025, EuroHPC went further by launching a procurement for a new quantum computer in the Netherlands, to be integrated into the Dutch national supercomputer Snellius at Amsterdam Science Park. This system will use a cutting-edge semiconductor spin-qubit processor, chosen for its scalability and ties to Europe’s semiconductor industry. Once operational, it will allow European researchers to run hybrid classical-quantum workflows (like climate modeling or molecular simulations) with the quantum part executed on a local spin-qubit machine rather than a distant cloud.
Crucially, all these EuroHPC-backed quantum systems are owned and operated under European auspices. The LUMI-Q system in Czechia, for instance, will be owned by EuroHPC JU and hosted at IT4Innovations National Supercomputing Center in Ostrava, connected to the Karolina supercomputer. It’s being built by the Finnish company IQM (demonstrating European tech capability) and co-funded by a nine-country consortium. This ensures that expertise and IP also stay partly in Europe – local engineers will install, maintain, and learn from the machine. As EuroHPC stated, the goal is to offer the widest possible variety of quantum technologies to European users and to position Europe at the forefront of this field. By hosting different quantum modalities (superconducting, ion, photonic, neutral atom, annealing) within European HPC centers, they gain collective hands-on experience across the board. This diversity is a hedge against picking the “wrong” technology early, and it also avoids one-vendor dependence. If, say, one of the hosted systems is from an American vendor, others are from European or allied vendors, balancing the dependence. All systems are accessed through EuroHPC’s allocation calls, meaning a researcher in any EU country can request time on any of the quantum machines, much like they do for supercomputers.
Democratized access is a sovereignty boon: European industry and academia won’t be shut out if a foreign provider later decides to restrict accounts or raise prices – they have a guaranteed baseline of quantum capacity provided as a public resource.
The European strategy also emphasizes developing a unified software and control stack for hybrid quantum-classical computing. Integrating quantum accelerators into HPC workflows is non-trivial; it requires new scheduling systems, resource managers, and programming interfaces that can dispatch jobs between classical and quantum processors. To that end, EuroHPC projects are working on a “hybrid software stack” that can manage HPC and QC workloads together. All the hosting entities for the new quantum nodes are collaborating with European standardization bodies to ensure interoperability and avoid fragmentation.
This focus on common standards (for job submission, middleware, etc.) directly feeds into sovereignty: it means a researcher’s code can run on any European quantum backend without having to rewrite it for a proprietary SDK. In effect, Europe is trying to establish a quantum computing environment on its own terms – where the APIs and workflows are designed in Europe (or via open international standards), not imposed unilaterally by, for example, a single U.S. cloud vendor.
If successful, this will make it much easier to swap out or add quantum devices in the future (whether domestic or foreign-supplied), because the surrounding ecosystem is under Europe’s control.
We see a concrete example of this vision at Forschungszentrum Jülich (Germany). Jülich’s supercomputing center (JSC) has been a leader in advocating hybrid HPC-quantum computing. In 2025, JSC opted to purchase a D-Wave Advantage™ quantum annealer and install it on-premises, becoming the first HPC center in the world to own a D-Wave system. They connected this 5,000+ qubit annealer (dubbed JUPSI) to their JUPITER exascale supercomputer, creating the world’s first integration of an annealing quantum computer with an exascale HPC. JSC’s reasoning was both technical and strategic. By buying the machine outright, Jülich’s researchers gain complete access to all its parameters and can integrate it in new ways – for example, establishing direct, high-speed links between the quantum and classical processors rather than going through a cloud API. In other words, sovereign ownership enabled deeper innovation: the machine is not a distant black box but part of JSC’s unified infrastructure (appropriately, the initiative is called JUNIQ – Jülich UNified Infrastructure for Quantum computing).
It’s worth highlighting that Jülich chose this path instead of relying solely on cloud access. The result is not isolation from the global ecosystem – Jülich still collaborates extensively with D-Wave and others – but rather a balanced sovereignty: Germany gains a reliable, upgradeable quantum resource on its terms, while also remaining plugged into broader advances (for instance, the D-Wave system at Jülich will be upgraded to the next-gen Advantage2 chip when ready, illustrating a close vendor partnership).
This mirrors the earlier point about sovereign optionality: Jülich can use its own annealer for certain workloads and still use IBM’s or others’ quantum services via the cloud for different use cases (indeed, eight Czech universities are similarly using IBM’s cloud quantum computers through a national IBM Quantum hub, as a complementary approach).
The European model, therefore, mixes domestic infrastructure, allied co-development, and selective use of foreign clouds – all governed by an insistence on local control over critical aspects (data, access, IP). As Europe expands programs like the European Quantum Communication Infrastructure (EuroQCI) and potentially a future “EU Quantum Cloud,” we can expect this mosaic of sovereign and semi-sovereign access models to continue evolving. The end-state might be a federated European quantum cloud where multiple countries’ quantum processors (domestic and imported) are accessible through a single European-controlled portal, with guaranteed sovereignty safeguards written into every layer of the stack.
Sovereign Access Models: From Foreign Cloud to Hybrid Control
Not all nations have the same approach or needs, but we can categorize a few models for quantum compute access along a spectrum of sovereignty. Each comes with trade-offs in terms of control, cost, and capability:
Full reliance on foreign cloud
The simplest (and least sovereign) model is to use a public quantum cloud service provided by a foreign tech company under standard terms. For example, any researcher can today run jobs on IBM Quantum or Amazon Braket, accessing cutting-edge devices in the USA or Canada. This route maximizes convenience and early access to the best machines, but at the cost of zero control over the service beyond what the provider grants. The provider can unilaterally change APIs, pricing, priority, or cut off access per its government’s directives. Data will traverse foreign soil, and users must accept whatever audit/security provisions the service offers.
This model is acceptable for basic research and non-critical experimentation, especially in the NISQ era where no nation wants to miss out on learning. In fact, many European researchers have used IBM’s US-based Quantum Computation Center via cloud since 2020, as part of the Fraunhofer partnership. But as quantum moves toward production use, the “pilot, use-at-your-own-risk” posture gives way to concerns about dependency.
Thus, full foreign cloud reliance is seen as a temporary stepping stone – useful for gaining experience, but a strategic exposure if continued indefinitely for mission-critical workloads.
Domestic control plane over foreign hardware
A more sovereign variant is when a country retains control over the user-facing layer and orchestration, even if the qubits are foreign. In this model, domestic institutions operate the scheduling system, job queue, and perhaps even host certain aspects of the service locally, but connect to an external quantum backend. This could be done, for example, via a secure gateway to a foreign provider’s API, where the domestic side handles identity management and access approvals, and perhaps caches results – ensuring, for instance, that all user data is encrypted in transit and decrypted only on local servers.
Another approach is a federated cloud: multiple countries create a joint platform that can broker quantum jobs to various hardware (some of which might be abroad) while abstracting the process under a sovereign interface. The HPCQS project in Europe hints at this, aiming to coordinate a cloud-based infrastructure that ties together quantum resources in France and Germany for Europe-wide use.
In practice, this model can also mean leveraging open-source frameworks to submit jobs to foreign quantum systems so that you’re not dependent on a proprietary web portal. The benefit here is partial autonomy – you manage your users and workflows, so you can enforce local policies (who can run what, how results are stored, etc.), even though the physics is happening on someone else’s machine. It also provides an easy off-ramp: if vendor A’s hardware becomes unavailable, you could swap in vendor B’s, since your users are interfacing with your domestic control plane (not vendor A’s unique tools).
The challenge is that it requires technical sophistication to set up and may not grant 100% access to hardware features. However, projects like Q-AIM (Quantum Access and Integration Middleware) are emerging to offer vendor-independent, open-source solutions for managing quantum workflows across different backends. By adopting such tools, countries can create a unified quantum job manager that treats external quantum processors as plug-and-play resources – thereby minimizing lock-in and centralizing control on home turf.
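A minimal sketch of such a domestic control plane appears below. The backend adapters, class names, and policy logic are hypothetical stand-ins for real vendor APIs and Q-AIM-style middleware:

```python
from abc import ABC, abstractmethod

class QuantumBackend(ABC):
    """Hypothetical adapter interface; one subclass per vendor API."""
    @abstractmethod
    def run(self, circuit_text: str, shots: int) -> dict: ...

class VendorABackend(QuantumBackend):
    def run(self, circuit_text, shots):
        # In reality: call vendor A's REST API over an encrypted channel.
        return {"backend": "vendor-a", "counts": {"00": shots}}

class VendorBBackend(QuantumBackend):
    def run(self, circuit_text, shots):
        return {"backend": "vendor-b", "counts": {"00": shots}}

class SovereignGateway:
    """Domestic control plane: identity, policy, and logging stay local;
    only the circuit crosses the border, and only to approved backends."""

    def __init__(self, backends: dict, approved: set):
        self.backends = backends
        self.approved = approved  # locally enforced allowlist
        self.audit_log = []       # retained on domestic servers

    def submit(self, user: str, target: str, circuit_text: str, shots: int) -> dict:
        if target not in self.approved:
            raise PermissionError(f"backend '{target}' not approved for {user}")
        result = self.backends[target].run(circuit_text, shots)
        self.audit_log.append({"user": user, "target": target, "shots": shots})
        return result

gw = SovereignGateway(
    backends={"vendor-a": VendorABackend(), "vendor-b": VendorBBackend()},
    approved={"vendor-a"},  # vendor B kept as a cold standby
)
print(gw.submit("alice@agency.example", "vendor-a",
                "OPENQASM 3.0; qubit[2] q; h q[0]; cx q[0], q[1];", shots=1024))
```

Because users talk only to the gateway, swapping vendor A for vendor B becomes a one-line policy change rather than a retraining exercise.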
Domestic hosting of foreign technology
This model is exemplified by the IBM-Fraunhofer partnership in Germany and similarly by IBM’s deployment in Japan and Canada. Here, the quantum hardware and its core software are provided by an external vendor, but the machine is physically installed in the sovereign territory and operated in collaboration with a local institution.
Germany’s IBM Quantum System One, launched in 2021, was the first of its kind in Europe. It gave German scientists hands-on experience with a 27-qubit processor on German soil, effectively “importing” capability in a sovereign-friendly way. Local researchers got full access and could learn by doing, while all data remained under German jurisdiction.
This arrangement often involves training and knowledge transfer – IBM provided technical support, but Fraunhofer personnel were deeply involved in running experiments. The sovereignty advantage is significant: the host country can enforce its own data protection and security standards at the facility, decide who is allowed physical or remote access, and gain familiarity with the technology that can seed domestic innovation. Indeed, Fraunhofer has said that early access to IBM’s machine “helped Germany then develop its own prototypes”, by building local expertise.
The limitation, however, is continued dependency on the vendor for new generations of hardware and for proprietary software updates. Essentially, one is renting a foreign airplane but keeping it in one’s own hangar with one’s own pilots. If the vendor withdraws or falls under export ban, the operation might not be maintainable. Nonetheless, as an intermediate model, this has proven highly effective. It’s no coincidence that IBM opened a Quantum Data Center in Europe – they see governments are more comfortable when the hardware is regionally hosted and governed.
Similarly, Canada’s D-Wave has installed systems in European labs (like Jülich) and Japan’s companies are partnering to set up hardware locally. We can expect more “bring the mountain to Mohammed” arrangements: countries will invite top vendors to install machines in-country, possibly sweetened by co-investment or guaranteed usage contracts.
This model strikes a balance: foreign tech, but domestic control over location and usage.
Domestic hardware with foreign support/software
This is a twist on the above, where the hardware might even be built domestically or by a domestic-led consortium, but certain software components or expertise come from abroad. For instance, a country might develop a quantum computer using local companies or an open design, yet rely on a foreign software stack (say, a well-established quantum SDK or cloud interface) to program and operate it.
Alternatively, the country might buy core hardware components from abroad (like qubit chips or microwave electronics) but integrate them locally. Many current quantum startups operate this way: they stand on global shoulders (using, for example, American-made arbitrary waveform generators or French cryostats) but assemble and program systems to serve their home market.
The sovereignty here lies in owning the assembled system and source code, even if not every part is domestic. For example, Finland’s IQM provides superconducting quantum processors that will be used in the LUMI-Q system in Czechia. While IQM is European (Finnish-German), they undoubtedly use some components from the US or elsewhere. The key point is that Europe will own the final machine and have full control over its operation. If bugs in the control software appear, the EuroHPC teams can work on fixes in collaboration with the vendor, or even develop independent software if needed.
Compare that to a cloud-only scenario where users have no visibility into the control software at all. This model maximizes learning-by-building: domestic teams gain the know-how to possibly replace any foreign piece later with a homegrown solution.
It is a longer-term sovereignty play – investing in the capability to build and run quantum computers locally, even if some modules are initially imported. Over time, the reliance on foreign pieces can be swapped out as domestic industries mature (much like Japan achieved a fully homegrown 64-qubit machine by leveraging decades of electronics expertise).
In essence, this model treats foreign tech as scaffolding: useful support to construct one’s own house, but ultimately removable.
Allied co-development or joint procurement
As discussed, teaming up with allies is a force multiplier for sovereignty. In quantum, we see this in multi-country projects and even bilateral agreements. The EuroQCS/EuroHPC program is one example – countries sharing costs and results across a network of quantum installations. Another is the AUKUS pact between the US, UK, and Australia which explicitly includes joint efforts on quantum technologies, effectively creating a mini “trusted cloud” among those nations for sensitive quantum work. Co-development can also occur in research consortia (e.g. French, German, Italian labs pooling expertise to build a specific quantum prototype).
The sovereignty benefit is that no single country is left at the mercy of an external entity; access is guaranteed by treaty or contract among the allies. For smaller nations especially, banding together may be the only viable way to afford cutting-edge quantum systems without going to a foreign vendor. Of course, this requires high levels of trust and alignment of interests among partners. But within the EU framework, this is quite natural – EuroHPC itself is predicated on mutual trust and legal structures that ensure fair access for all contributors. A user in Belgium can utilize a quantum computer in Spain because both are in the EU program, just as a user in New York can use a Microsoft data center in Virginia.
When done right, allied procurement can yield sovereign capabilities that no one country could achieve alone. Each ally gets an equity stake in the capability. It’s also a diplomatic signal: technologies developed in such partnerships are less likely to be withheld during political disagreements, since multiple countries have ownership. We might foresee NATO or other alliances extending this concept – perhaps a NATO Quantum Research Center where member states share access.
In any case, for regions like Europe, co-sovereignty is a core part of tech sovereignty.
Full national sovereignty (complete self-reliance)
This final model is aspirational and currently rare. It means a country can design, build, and operate quantum computers entirely on its own, with domestic components and knowledge at every level. As noted, no major power is fully there yet (even the U.S. relies on some European components, and China relied on some Western tech until recently). Japan’s achievement of a fully homegrown 64-qubit system in 2023 stands out as a notable exception. But even Japan leveraged its existing electronics industry and still benefits from international research.
For most nations, achieving 100% quantum self-sufficiency is an immense challenge and arguably an unnecessary duplication of effort if allies exist. However, the pursuit of high degrees of self-reliance can drive innovation (e.g. Europe investing in its own quantum chip fabs or control electronics).
Realistically, “full sovereignty” is often more about optics and bargaining power – demonstrating you could go it alone so that partners take your concerns seriously. In practice, a country doesn’t need to reinvent every wheel; it just needs enough indigenous capability to maintain freedom of action. That might mean having at least one domestic quantum platform (even if not the best in the world) for critical tasks, and fostering local companies so that foreign suppliers remain competitive and open with you.
Complete quantum sovereignty remains a long-term goal for the largest economies (US, China, maybe EU as a bloc), but for most others, it’s about mixing the models above to get as close as feasible to independence where it counts.
It’s clear there’s no one-size-fits-all solution. Each model involves trade-offs between sovereignty, speed, and cost. Countries may also employ multiple models simultaneously – for example, a nation might use foreign cloud services for research and education, host a foreign vendor’s machine for industry use, and collaborate in an allied quantum center for government use. The key is to be deliberate about these choices: identify where sovereignty is non-negotiable (e.g. defense applications) and secure those, while being flexible elsewhere. This layered approach is exactly how classical computing sovereignty has played out, and quantum is following suit.
Ensuring “Sovereign Access” – Best Practices and Recommendations
As governments and enterprises embark on quantum computing adoption, they should incorporate sovereign access guarantees into their strategies and contracts. Below are best-practice recommendations to maximize control and minimize strategic risk when leveraging quantum compute as a service:
Prioritize Data Residency and Jurisdiction
Insist that all quantum computations involving sensitive data occur under your jurisdiction. This can mean choosing a provider that offers a local/regional hosting option or negotiating for one. For example, require that “all project and user data remain on servers located in [your country/region] at all times”. If using a multinational cloud, opt for regions that meet your legal criteria (e.g. an EU user choosing an EU-based quantum data center, such as IBM’s in Germany, to stay under EU law). Include clauses that the provider will not move or replicate your data outside agreed locations without consent.
This protects you under home data protection laws and prevents foreign government access to your data. Essentially, make data locality a hard requirement for any quantum cloud engagement.
Demand Operational Transparency and Audit Logs
A sovereign user should be able to audit the quantum service as thoroughly as a classical one. In contracts, specify the provider must supply job-level metadata for each run – including timestamp, device ID, qubit calibration data at runtime, number of shots, random seeds, error mitigation settings, etc. This information allows your experts to reproduce and verify results independently. It is also crucial for compliance in regulated sectors (finance, healthcare) where you must document how an outcome was obtained. Do not accept “black box” operation. If a provider is unwilling to share detailed logs (perhaps citing IP concerns), consider running those jobs on an alternative platform where you have more visibility.
Audit rights should also cover the provider’s security measures – you should be allowed (perhaps via a trusted third party) to audit that the quantum service meets promised standards (e.g. isolation between customer workloads, proper encryption of data at rest and in transit, and no unauthorized access). Essentially, build in the right to “trust, but verify”.
Maintain Complete Data Sovereignty over Code and Results
Treat your quantum circuits, algorithms, and results as crown jewels. Ensure contracts state that all code you run and the outputs produced are your proprietary data, not to be used by the provider for any purpose other than executing your jobs. Many cloud terms allow providers to use customer data to improve services or for analytics – carve out an exception for your quantum workloads. Specifically, prohibit the provider from examining or deriving insights from your circuits or logs (beyond automated error correction processes). If the platform internally re-translates your code (e.g. compiles to another form for their hardware), it should not retain those translations beyond the job’s completion.
Also set retention policies: you might require that all your job data be deleted from the provider’s systems after a certain time (unless you choose to store it). If you are especially concerned about confidentiality (e.g. running sensitive defense algorithms), consider running jobs in encrypted form (some research is exploring verified blind quantum computing, though it’s early). At minimum, include a clause that you can purge all your data from the service upon contract termination or request – and that any backups are also destroyed.
Clearly Delineate Allowed vs. Disallowed Jurisdictions
If using a multi-region quantum cloud, explicitly whitelist or blacklist locations. For instance, “Jobs may only execute on quantum processors physically located in EU or Five Eyes countries” could be a policy if you trust those jurisdictions. Conversely, “Provider shall not route any of our workloads to data centers in Country X”. This protects you from hidden reroutes; even if the provider has a cluster in an unapproved country, they must restrict your jobs to approved ones.
Likewise, include notification/consent rights: if the provider ever needs to move your quantum job to a different site (say, due to maintenance), they must obtain approval if that site is in a jurisdiction you hadn’t agreed to. This ties into compliance with export controls – you might forbid any handling of your jobs in countries under certain sanctions or those lacking adequate IP protection.
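A customer-side gateway can enforce such clauses mechanically before any job leaves home soil. The sketch below is illustrative; the jurisdiction labels and device records are assumptions:

```python
# Illustrative residency check run by the customer's own gateway before a
# job is released to the provider (labels and records are assumptions).
ALLOWED_JURISDICTIONS = {"EU", "EEA"}   # contractual whitelist
DENIED_JURISDICTIONS = {"COUNTRY_X"}    # contractual blacklist

def may_dispatch(device: dict) -> bool:
    """True only if the target device sits in an approved jurisdiction."""
    jurisdiction = device["jurisdiction"]
    return (jurisdiction in ALLOWED_JURISDICTIONS
            and jurisdiction not in DENIED_JURISDICTIONS)

devices = [
    {"name": "qpu-frankfurt", "jurisdiction": "EU"},
    {"name": "qpu-offshore", "jurisdiction": "COUNTRY_X"},
]
for device in devices:
    print(device["name"], "->", "dispatch" if may_dispatch(device) else "blocked")
```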
Secure Capacity and Priority with SLAs
Don’t rely on best-effort service if your use case is mission-critical. Push for Service Level Agreements that guarantee aspects that matter to you – not just uptime of the API, but availability of the quantum hardware when needed. For example, negotiate a reserved time window each day or a guaranteed number of shots per hour that will be available to you. If latency is an issue (e.g. you need rapid turnarounds), include an SLA on job turnaround time or queue wait time. At the very least, ensure there are penalties or credits if the service is unavailable beyond a certain threshold. Be aware that many quantum services today are labeled “experimental” or “preview”, which providers use to avoid strict warranties. While this is fine for early R&D, once your usage approaches production importance, you should renegotiate terms out of “experimental” mode. If a provider isn’t willing to offer any meaningful SLA (common in early days), mitigate by having a secondary provider as backup.
In essence, plan for redundancy: have more than one source of quantum access so that a failure or delay in one doesn’t debilitate your operations. This could mean maintaining a small in-house quantum system for urgent tasks while using cloud for large experiments.
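Verifying an SLA is only possible if you measure the service yourself. The sketch below checks logged queue waits and availability against hypothetical thresholds – the numbers are placeholders, not typical vendor terms:

```python
from statistics import quantiles

# Hypothetical SLA terms (placeholders, not real vendor commitments):
# 95% of jobs start within 15 minutes; monthly availability >= 99.0%.
MAX_QUEUE_WAIT_P95_MIN = 15
MIN_AVAILABILITY = 0.99

def sla_report(queue_waits_min: list, up_minutes: int, total_minutes: int) -> dict:
    """Compare observed service behavior against negotiated thresholds."""
    p95 = quantiles(queue_waits_min, n=20)[-1]  # 95th percentile of waits
    availability = up_minutes / total_minutes
    return {
        "queue_wait_p95_min": round(p95, 1),
        "queue_wait_ok": p95 <= MAX_QUEUE_WAIT_P95_MIN,
        "availability": round(availability, 4),
        "availability_ok": availability >= MIN_AVAILABILITY,
    }

waits = [2, 3, 5, 8, 4, 60, 7, 6, 3, 9, 5, 4, 12, 6, 5, 3, 7, 90, 4, 5]
print(sla_report(waits, up_minutes=43000, total_minutes=43200))
```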
Include Escrow and Exit Provisions
Mitigate the risk of vendor instability or lock-in by arranging escrow agreements for critical software or even hardware IP. For instance, if you heavily rely on a provider’s proprietary SDK to implement your algorithms, consider a source code escrow: the code is held by a neutral party and can be released to you if the vendor goes out of business or discontinues the service. This would allow you (in theory) to continue running your applications on some alternative platform.
In contracts, also specify what happens upon termination: ensure you have the right (and a workable mechanism) to retrieve all your data, including circuit libraries, performance metrics, etc., in a standard format. If you have custom calibration or configuration data on a quantum machine, that should be handed over too. In some cases, governments have negotiated the right to physically purchase the hardware at a depreciated cost if the service contract is ending – effectively an “off-ramp” to keep the machine running locally. While not always feasible, the idea is to avoid being left stranded.
Portability is the watchword: structure arrangements so you can port your workloads elsewhere with minimal disruption. One practical step is to develop your quantum software in a hardware-agnostic way. Use open-source frameworks (like Qiskit, Cirq, or Braket’s open SDK) or intermediate languages (like OpenQASM or QIR) that can target multiple backends, rather than a highly proprietary language. Europe’s push for a standard hybrid stack and collaboration with standard bodies is precisely to ensure such portability. By writing your quantum applications in a portable form, you gain leverage – if Provider A falters, you can switch to Provider B or to an on-premise device with less pain. In summary, always have an exit strategy when entering a quantum cloud relationship.
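As a small illustration of portable authoring, the sketch below writes a circuit once in Qiskit and archives it as vendor-neutral OpenQASM 3 text. It assumes a recent Qiskit release (the qiskit.qasm3 module; re-importing additionally needs the optional qiskit-qasm3-import package):

```python
# Author once in an open framework, archive the OpenQASM 3 text as the
# vendor-neutral artifact, and re-import it on whatever backend survives
# the next contract cycle. Assumes a recent Qiskit release.
from qiskit import QuantumCircuit, qasm3

bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])

portable_text = qasm3.dumps(bell)  # vendor-neutral representation to archive
print(portable_text)

# Reload on any OpenQASM 3-aware stack (needs qiskit-qasm3-import installed).
roundtrip = qasm3.loads(portable_text)
print(roundtrip.count_ops())
```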
Mandate Crypto-Agility and Security Practices
Quantum services straddle classical networks (for job submission and result retrieval), so all the usual cloud security concerns apply – plus some unique twists. Ensure the provider follows best practices like end-to-end encryption for data in transit and at rest, strong authentication, and isolation between customers (no co-mingling of jobs that could lead to cross-talk even at the quantum circuit level).
Beyond that, a forward-looking clause is to require quantum-safe encryption be implemented for all communications. Given the rise of quantum computers, providers themselves should transition to post-quantum cryptography (PQC) for securing channels and stored data. Your contract can require that the supplier maintain a clear plan for migrating their control plane, APIs, and storage to quantum-resistant encryption within a certain timeframe. This ensures that as the years go by, your sensitive data isn’t sitting encrypted with algorithms that could be broken by advanced quantum techniques.
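As a sketch of what quantum-safe channel setup can look like in practice, the example below establishes a shared secret with a post-quantum KEM using the open-source liboqs Python bindings. It assumes the `oqs` package with ML-KEM enabled in the underlying liboqs build (mechanism names vary across versions; older builds call this one “Kyber768”):

```python
# Quantum-safe key establishment for a job-submission channel, via the
# open-source liboqs Python bindings (pip package `oqs`). Assumes liboqs
# was built with ML-KEM enabled; mechanism names vary by version.
import oqs

KEM_ALG = "ML-KEM-768"

# Client (job submitter) generates a keypair and shares the public key.
with oqs.KeyEncapsulation(KEM_ALG) as client:
    public_key = client.generate_keypair()

    # Server (the quantum cloud front end) encapsulates a shared secret.
    with oqs.KeyEncapsulation(KEM_ALG) as server:
        ciphertext, server_secret = server.encap_secret(public_key)

    # Client recovers the same secret; it can now key an AEAD cipher that
    # protects circuits and results in transit.
    client_secret = client.decap_secret(ciphertext)
    assert client_secret == server_secret
    print("shared secret established:", client_secret.hex()[:16], "...")
```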
Also consider the interplay of your own cryptographic posture: if you’re sending encrypted data to the quantum cloud (say, encrypted problem instances), discuss key management and whether the provider ever needs decryption capability (ideally not).
Security audits and compliance certifications (ISO 27001, SOC 2, etc.) are as important for quantum cloud providers as for any cloud – insist on seeing those, and possibly add the right to conduct penetration testing of the cloud interface (or review their reports).
The bottom line is to not treat a quantum cloud as a mystical box outside normal security governance. It should be held to the same or higher standard, especially as it might be handling extremely valuable computational tasks.
Leverage Alliance and Public Procurement Tools
If you are a government or part of a consortium, use collective bargaining to embed sovereignty in contracts. For example, through the EU or similar bodies, set common requirements that any cloud quantum provider must meet to sell into the public sector. This is already happening under the EU Cloud Rulebook approach. By banding together (as the EU did with its Cloud Sovereignty tender), customers can nudge providers to offer sovereign versions of their services. We see this with major cloud companies launching “sovereign cloud” offerings in Europe that are operated by a ring-fenced EU entity and meet strict data localization rules.
A similar concept could emerge for quantum: e.g. an “EU Quantum Cloud” where international providers deploy hardware but under an EU-based management framework and with EU partners operating it. As a national stakeholder, advocate for and participate in these joint efforts. They spread costs and also reduce the risk that any one country gets a suboptimal deal. Allied co-procurement can also extend to hardware: consider joining forces to buy a quantum system together (as some European nations have with LUMI-Q). Each partner can then have guaranteed access rights written into the agreement.
This approach not only improves bargaining power with vendors (bigger order, more influence), but also ensures no single point of failure in terms of political risk.
Foster Domestic Skills and Backup Capacity
Sovereign access isn’t just about contracts and hardware – it’s also about people and knowledge. Invest in training a local workforce that can operate, maintain, and even design quantum technologies. The more you can rely on your own experts, the less you are beholden to a vendor’s schedule or tech support. For instance, if you have in-house quantum engineers who understand error correction and calibrations, they could potentially step in to fix issues on a machine (or at least understand them) without waiting in a foreign support queue. This was a benefit Germany reaped from having IBM’s system on-site: it catalyzed a “quantum-ready” workforce that then started building indigenous solutions.
Also consider maintaining some level of on-premise quantum capability, even if it’s modest (like a few qubits or a quantum simulator). This can serve as a testbed and a safety net. For example, if geopolitical events abruptly cut off cloud access, having a local prototype (or even high-performance classical simulators of quantum circuits) means your critical R&D doesn’t halt completely. Many universities are now acquiring small quantum rigs for this reason – not to solve large problems, but to ensure students and researchers can continue hands-on work regardless of external factors. Governments could fund a “national quantum sandbox” that is open to domestic users and not dependent on any foreign entity’s uptime.
Such efforts complement use of large cloud devices with sovereign resilience at a smaller scale.
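As a sketch of that safety net, the example below runs a circuit on a local classical simulator instead of a remote device; it assumes the open-source qiskit and qiskit-aer packages are installed:

```python
# Sovereign fallback: if cloud access is cut off, the same circuit runs on
# an on-premises classical simulator. Assumes qiskit and qiskit-aer.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def run_locally(circuit: QuantumCircuit, shots: int = 1024) -> dict:
    """Execute on a local simulator: no network, no foreign provider."""
    simulator = AerSimulator()
    job = simulator.run(transpile(circuit, simulator), shots=shots)
    return job.result().get_counts()

bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])
print(run_locally(bell))  # e.g. {'00': ~512, '11': ~512} up to shot noise
```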
Embrace Open Standards and Interoperability
Finally, as a strategic principle, push for open architectures in the quantum ecosystem. The more the community adopts common standards for things like circuit definitions (OpenQASM), cloud APIs (there are initiatives for a unified API), and even hardware interfaces, the less any one provider can impose terms. As I noted previously, “Quantum Open Architecture ethos [reduces] the power of any single supplier to hold others hostage”. By contributing to and using open-source tools for quantum development (e.g. AWS’s Braket SDK, IBM’s Qiskit, Google’s Cirq – all open source), you ensure that your investment in software is portable. If you write all your code in a closed framework tied to one vendor, that’s a lock-in risk. Instead, support initiatives like the QIR Alliance or middleware such as the qiskit-braket-provider (which lets Qiskit programs run on AWS Braket hardware) that blur provider lines. Some governments might even mandate that publicly funded quantum research produce open-source code or adhere to open standards, to build a public commons of quantum software that anyone’s hardware can run.
This not only accelerates innovation but also underpins sovereignty: it prevents a scenario where a nation’s entire quantum program is built on, say, a single company’s proprietary language that could disappear. By standardizing for flexibility, nations keep their options open. Interoperability is sovereignty’s friend – it enables the plug-and-play flexibility to switch suppliers or mix and match as needed. We see this philosophy in Europe’s collaborative projects and in bodies like the European Quantum Industry Consortium (QuIC), which advocate for common standards and roadmaps so that European companies can collectively ensure supply chain resilience.
In summary: make “no vendor is irreplaceable” your mantra, and bake that into the technical choices you make from day one.
Conclusion
Sovereignty in quantum computing will be an ongoing balancing act. No country can or should cut itself off from the global quantum ecosystem – progress is moving too fast and is too collaborative for isolationism to succeed. However, as quantum moves from science experiment to strategic infrastructure, nations are wise to secure leverage and safeguards over their access to it.
By learning from cloud and HPC sovereignty efforts, governments can negotiate quantum access on favorable terms: keeping data local and secure, ensuring fair access and priority, and retaining the freedom to pivot if needed. Europe’s early actions – integrating quantum into HPC centers, enforcing data sovereignty, and co-investing across borders – provide a valuable case study in how to gain quantum capability without sacrificing self-determination. The rise of quantum-HPC national infrastructure is not just about technology – it’s about governance. It shifts the focus from qubits in a lab to qubits as a service, with all the contractual and political complexities that entails.
Those who proactively address these now, with clear strategies and robust agreements, will find themselves masters of their quantum destiny, able to harness tomorrow’s quantum breakthroughs with confidence and on their own terms, rather than renters at someone else’s whim.
Quantum Upside & Quantum Risk – Handled
My company, Applied Quantum, helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.