
Capability 2.2: Magic State Production & Injection (Non-Clifford Gates)

This piece is part of an eight‑article series mapping the capabilities needed to reach a cryptanalytically relevant quantum computer (CRQC). For definitions, interdependencies, and the Q‑Day roadmap, begin with the overview: The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day.

(Updated in Sep 2025)

(Note: This is a living document. I update it as credible results, vendor roadmaps, or standards shift. Figures and timelines may lag new announcements; no warranties are given; always validate key assumptions against primary sources and your own risk posture.)

Introduction

Magic states are an essential “extra ingredient” for universal quantum computing, often likened to a catalyst that enables otherwise impossible operations. Quantum algorithms require not only robust qubits and error correction, but also a way to perform non-Clifford gates – operations outside the easy Clifford group. These non-Clifford gates (like the T gate or controlled-controlled-Z) are the key to achieving universal quantum computation, yet they are notoriously difficult to implement fault-tolerantly. The leading strategy is to generate special ancillary qubits known as magic states, which, when consumed via state injection, produce the effect of a non-Clifford logical gate on the data qubits. In a sense, magic states provide the “quantum fuel” that powers the hardest part of the computation. However, producing these states in sufficient quantity and quality is one of the grand challenges on the road to a Cryptographically Relevant Quantum Computer (CRQC). This capability – high-fidelity magic state production and injection at scale – is often cited as the primary bottleneck (or dominant cost center) for running algorithms like Shor’s factoring on a large quantum computer. In this article, we dive deep into what magic state distillation and injection entail, why they are absolutely critical for a CRQC, the current state of the art and recent breakthroughs, and what milestones to watch for as this capability develops.

What Are Magic States and How Do They Enable Non-Clifford Gates?

In error-corrected quantum computers (for example, those based on the surface code), certain operations are “easy” – specifically, the Clifford group gates (Hadamard, CNOT, the phase gate S, and the Pauli gates). These can typically be done transversally or with low overhead on logical qubits. By contrast, non-Clifford gates such as the T gate (a $$\pi/8$$ rotation) or a Toffoli (CCX) gate cannot be implemented directly on a logical qubit without introducing unmanageable errors. A famous theorem (the Gottesman-Knill theorem) shows that circuits built only from Clifford gates, computational-basis preparations, and measurements are efficiently simulable classically, so to get quantum advantage one must include a non-Clifford element. The question is how to add non-Clifford gates into a fault-tolerant circuit without breaking the error correction. The standard solution, proposed in the mid-2000s by Knill and by Bravyi and Kitaev, is magic state distillation. The idea is to carefully prepare a special ancilla state that is not a stabilizer state (hence “magic”), and then inject it into the circuit to perform the non-Clifford operation.
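
To make the Clifford vs. non-Clifford distinction concrete, here is a minimal NumPy check (plain linear algebra, nothing vendor-specific): conjugating a Pauli by the Clifford gate S yields another Pauli, while conjugating by T does not – which is exactly the property that excludes T from the Clifford group.

```python
import numpy as np

# Single-qubit Paulis plus the S (Clifford) and T (non-Clifford) gates.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def is_pauli_up_to_phase(M):
    """True if the unitary M equals I, X, Y, or Z up to a global phase."""
    # |tr(P† M)| reaches its maximum of 2 exactly when M ∝ P.
    return any(np.isclose(abs(np.trace(P.conj().T @ M)), 2)
               for P in (I, X, Y, Z))

# Clifford gates map Paulis to Paulis under conjugation; T does not.
print(is_pauli_up_to_phase(S @ X @ S.conj().T))  # True:  S X S† = Y
print(is_pauli_up_to_phase(T @ X @ T.conj().T))  # False: T X T† = (X+Y)/√2
```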

A magic state is essentially a quantum state that, if available on demand, lets you execute a non-Clifford gate on a logical qubit via a small gadget circuit. For example, a single-qubit magic state often denoted $$|T\rangle$$ (essentially the T gate applied to a $$|+\rangle$$ state) can be consumed to enact a T gate on a data qubit. Likewise, a three-qubit magic state (like the $$|CCZ\rangle$$ state) can enable a CCZ gate when injected. These states are called “magic” because you cannot produce them using only Clifford operations – they lie outside the set of states reachable by the fault-tolerant Clifford gates. Instead, one must prepare many noisy copies of a magic state using whatever physical operations are available, then apply a distillation protocol to purify them. Magic state distillation is effectively an error-correcting code at the state level: by consuming (e.g. measuring) several noisy ancillas, one can project a subset into a higher-fidelity magic state. Repeating this over multiple rounds suppresses the magic state’s error rate extremely rapidly (roughly cubing it each round in the standard protocols), at the cost of consuming many raw states per purified output. The classic examples are the Bravyi-Kitaev protocols (circa 2004-2005) – a 5-to-1 routine for one family of magic states and a 15-to-1 routine for another (the 15-to-1 version is the one usually quoted for T states) – which showed that given enough noisy magic states and spare qubits, one can boost their fidelity to arbitrarily high levels. This laid the foundation for universal fault-tolerant quantum computing: Clifford operations handle the easy part, and distilled magic states provide the hard gates.
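
For concreteness, the most commonly used single-qubit magic state, and the standard small-error scaling of the 15-to-1 protocol (output error as a function of input error $$p$$), are:

$$|T\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{i\pi/4}|1\rangle\right), \qquad p_{\text{out}} \approx 35\,p_{\text{in}}^{3}.$$

So a raw state with error $$p_{\text{in}} = 10^{-3}$$ emerges from one round at roughly $$3.5\times10^{-8}$$, and from a second round at about $$1.5\times10^{-21}$$ – which is why a handful of rounds suffices once the inputs are reasonably clean, and why cleaner inputs pay off so dramatically.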

State injection is the process of using a magic state to actually perform the gate. In practice, the quantum computer would route a prepared magic ancilla into an interaction with the target logical qubit (usually via a small Clifford network of gates), then measure certain qubits. If the magic state was prepared correctly, this procedure has the effect of the desired non-Clifford gate on the logical state. (There is typically a need for a conditional correction depending on the measurement outcome – a known Pauli, or in the T-gate case a Clifford S gate – a manageable form of “feed-forward” adjustment.) The magic ancilla is consumed (measured) in the process. To implement millions or billions of T gates, the machine must continually produce and inject fresh magic states, all while error correction runs in the background. This is why we speak of magic state factories – dedicated regions of a quantum computer that would do nothing but churn out high-fidelity magic states for the main algorithm to use.
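
The single-qubit T gadget is small enough to verify directly. Below is a minimal state-vector toy model in NumPy (no error correction, purely illustrative): it consumes a $$|T\rangle$$ ancilla via one CNOT and one measurement, applies the conditional S correction, and confirms that the data qubit ends up in the state $$T|\psi\rangle$$.

```python
import numpy as np
rng = np.random.default_rng(0)

# Gates and states (qubit order: data ⊗ ancilla).
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)   # |T> = T|+>

def inject_t(psi):
    """Apply a T gate to the 1-qubit state psi by consuming a |T> ancilla."""
    state = np.kron(psi, t_state)          # attach the magic ancilla
    state = CNOT @ state                   # data is control, ancilla is target
    # Measure the ancilla in the Z basis.
    p0 = np.linalg.norm(state[[0, 2]])**2  # amplitudes with ancilla = |0>
    outcome = 0 if rng.random() < p0 else 1
    psi_out = state[[outcome, 2 + outcome]]
    psi_out /= np.linalg.norm(psi_out)
    if outcome == 1:
        psi_out = S @ psi_out              # conditional Clifford correction
    return psi_out

# Check against applying T directly (states match up to a global phase).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
out, ref = inject_t(psi), T @ psi
print(np.isclose(abs(np.vdot(ref, out)), 1.0))   # True
```

Note that the ancilla is destroyed by the measurement – exactly the sense in which each injected gate “consumes” one magic state. On a real machine the same gadget runs on logical qubits, with the CNOT and measurement implemented fault-tolerantly (in surface-code architectures typically via lattice surgery), but the logic is identical.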

Why This Capability Is Critical for CRQC (Role in Universal Quantum Computing)

Magic state production & injection is often described as the linchpin of universal quantum computing. The reason is simple: without an efficient way to perform non-Clifford gates, a quantum computer cannot run the full range of algorithms that outperform classical computers. From a cryptographic perspective, algorithms like Shor’s factoring algorithm are heavy users of T gates (or equivalently Toffoli gates, which decompose to T gates and Cliffords). In fact, the vast majority of the quantum operations needed to break RSA or ECC are T/Toffoli-type gates. Estimates for factoring a 2048-bit RSA number involve on the order of a few billion Toffoli gates. Gidney and Ekerå’s 2019 resource analysis famously brought this number down to the “billions” range (from earlier trillions) by algorithmic optimizations, but it’s still enormous. Crucially, each of those Toffoli gates would require a magic state (since Toffoli is a non-Clifford operation). That means a CRQC must supply billions of high-fidelity magic states in total, or equivalently be able to output on the order of millions of magic states per second to keep up with the algorithm. If magic state generation is too slow or error-prone, the entire computation bogs down or fails.
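
For translating between the two gate counts: in the standard ancilla-free decomposition each Toffoli costs seven T/T† gates plus Cliffords, and measurement-assisted constructions bring this down to four, so Toffoli counts and T counts differ only by a small constant factor:

$$N_T \approx 7\,N_{\text{Toffoli}} \ \ (\text{ancilla-free}), \qquad N_T \approx 4\,N_{\text{Toffoli}} \ \ (\text{measurement-assisted}).$$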

This is why magic state factories are expected to dominate the design of a large-scale quantum computer. Studies have found that a large fraction of all physical qubits in a fault-tolerant machine might be devoted just to magic state production. For instance, older analyses pointed out that hundreds of thousands of physical qubits could be needed solely for distilling T states to break RSA-2048. In a plausible CRQC architecture for factoring RSA, one might envision a half-dozen or more magic state “factory” modules working in parallel, each continuously outputting a purified non-Clifford state (e.g. a CCZ or T state) every few clock cycles. Those states get immediately consumed by the algorithm’s circuits. If the supply rate is insufficient, the algorithm has to pause and wait for magic states, drastically increasing runtime. If the state error rates are too high, then the injected gates will introduce faults faster than error correction can handle, undermining the computation. In short, a CRQC is impossible without a fast and reliable non-Clifford gate mechanism, and today that essentially means magic state distillation and injection.

The stakes are illustrated by the impact on runtime: Suppose breaking RSA requires on the order of $$10^{12}$$ logical operations in total, and most of those are T gates. If our machine can perform, say, 10 million logical ops per second, it could finish the job in a couple of days – but only if it can supply T states at ~10 million per second as well. Any shortfall in the non-Clifford throughput directly lengthens the time to solution or forces a larger machine to compensate. Thus, Magic State Production & Injection isn’t just another capability; it’s often the rate-limiting step that will determine whether a given quantum computer design can achieve cryptographically relevant tasks in a reasonable time. Industry experts frequently refer to magic-state handling as the primary bottleneck for scaling to useful quantum computers. Overcoming this bottleneck is absolutely critical to reach Q-Day (the day a quantum computer can crack present-day encryption).
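
A back-of-envelope sketch of that arithmetic (all figures are the illustrative assumptions from the paragraph above, not measured values):

```python
# Illustrative throughput arithmetic for a hypothetical CRQC run.
total_t_gates = 1e12        # assumed non-Clifford (T) gate count for the run
logical_ops_per_sec = 1e7   # assumed logical clock rate of the machine

runtime_days = total_t_gates / logical_ops_per_sec / 86_400
print(f"runtime at full T-state supply: {runtime_days:.1f} days")

# If the magic state factories supply only a fraction of the demand,
# the algorithm stalls and the runtime stretches proportionally.
for supply_fraction in (1.0, 0.5, 0.1):
    t_states_per_sec = supply_fraction * logical_ops_per_sec
    days = total_t_gates / t_states_per_sec / 86_400
    print(f"T-state supply at {supply_fraction:4.0%} of demand -> {days:6.1f} days")
```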

Status and Recent Progress in Magic State Distillation

For many years, magic state distillation was a theoretical concept demonstrated only on paper. Researchers refined protocols (e.g. Bravyi-Haah 2012 schemes that improved on the original Bravyi-Kitaev method) and proposed elaborate factory architectures, but experimental quantum devices were too small and error-prone to implement any of it. That began to change very recently (2023-2025), marking an exciting inflection point for this capability. We are now seeing the first proof-of-concept demonstrations that magic states can be produced and used in actual hardware, albeit at small scale. Here are some notable milestones and research highlights:

First logical magic-state injections (2023-2024)

By late 2023, teams at IBM and Google independently demonstrated the basic magic state injection procedure on small quantum codes. IBM researchers prepared and injected a $$|T\rangle$$ magic state into a 17-qubit, distance-3 surface code patch on a superconducting device (“IBM Fez”), achieving a logical T-gate operation. Similarly, Google’s Quantum AI team reported injecting magic states into a distance-3 color code on their superconducting platform. These experiments were essentially early “break-even” tests – showing that one can take an error-corrected qubit and perform a non-Clifford gate on it, although with only modest fidelity. In the Google case, they used post-selection to filter out bad injections and achieved injected state fidelities above 99% among the kept runs (discarding about 25% of attempts). The IBM result similarly showed a logical T preparation just over the threshold fidelity needed for distillation. While distance-3 is the smallest quantum code with any error suppression, clearing the magic state distillation threshold in these experiments was a key proof-of-concept. It indicated that if you had this modest logical T state, you could in principle distill it further – the first step of the pipeline.

Magic state fidelity beyond distillation threshold (2023)

In parallel, a group at USTC (University of Science & Technology of China) achieved an impressive milestone on their 66-qubit superconducting processor Zuchongzhi 2.1. In an experiment reported in late 2023, they prepared three distance-3 logical magic states with fidelities ≈0.88-0.91 – notably above the accepted distillation thresholds (around 0.83 for T-type, 0.859 for H-type magic states). In other words, these logical T and H states were high enough quality that feeding them into a standard distillation circuit would yield even better magic states. Prof. Xiao-Bo Zhu, who led the effort, called it “a critical milestone” toward fault-tolerance. It was the strongest validation to date that small surface-code patches can actually generate the “raw” magic resources needed for the next level. This experiment, published in Physical Review Letters, demonstrated non-destructive preparation of magic states and showed that superconducting qubit systems (beyond just IBM’s) are closing in on the necessary fidelity for magic-state factories.

Full fault-tolerant gate set demonstrated (Quantinuum, 2025)

In mid-2025, Quantinuum (the company born from Honeywell’s trapped-ion division) announced that they had implemented all components of a fault-tolerant universal gate set on their ion-trap processors. This included generating magic states and performing a high-fidelity logical T gate on an encoded qubit – effectively showing, end to end, that a non-Clifford logical operation is achievable with an error rate below that of the underlying physical gates. Their approach was especially noteworthy: they tried two methods, one involving post-selection (detect a faulty magic state and discard it before it affects the computation) and another involving code switching. In the code-switching method, they used one quantum error-correcting code that is particularly friendly for magic state preparation (a small Reed-Muller code, which has transversal T gates) to create the state, then switched the encoded state into a more standard code (like the Steane or surface code) for the rest of the computation. By leveraging each code’s strengths, they achieved magic state fidelities high enough to execute a logical T with error well below the physical error of any single-qubit gate. In one case, they demonstrated that only 8 physical qubits (in a specific error-detecting circuit) were needed to distill a magic state with acceptable fidelity. They projected that using around 40 qubits for distillation would push the fidelity even further – still orders of magnitude lower overhead than conventional schemes that assumed thousands of qubits per state. This result was hailed by some experts as “the final missing piece” for scalable quantum computing – evidence that we now have all the primitives (logical qubits, Clifford gates, and magic-state-fueled T gates) working together on a small scale. It’s important to note these were still small-distance codes (the ion trap experiment used up to 15 qubits in a code-switching protocol), but achieving a logical T gate with lower error than the physical gates is a huge validation of the fault-tolerance concept.

First magic state distillation on hardware (QuEra/Harvard-MIT, 2025)

Another breakthrough in 2025 came from a team using a neutral-atom quantum computer. In July 2025, QuEra (a startup specializing in neutral atom arrays) and academic collaborators reported in Nature that they had successfully performed magic state distillation on logical qubits. This is essentially the realization of the full magic state factory loop: they prepared several noisy encoded magic ancillas and ran a distillation circuit to produce one of higher quality, all within an error-corrected logical space. It was the first time the complete distillation protocol (proposed ~20 years prior) was actually executed with qubits protected by QEC. While done on a small code, this experiment demonstrated that neutral-atom platforms can handle the complex, multi-qubit controlled operations needed for distillation. A popular science article described this as removing “a theoretical barrier to scalability,” since it proves in principle that quantum computers can be “both error-corrected and more powerful (via magic states) than supercomputers”. In the words of Yuval Boger, QuEra’s CCO: “Quantum computers would not be able to fulfill their promise without magic state distillation. It’s a required milestone.” Achieving it experimentally was a major validation of the overall architecture where logical T factories supply the computation.


Despite these exciting advances, it’s important to contextualize the Technology Readiness Level (TRL) of magic state production today. We might estimate it at TRL 2-3 (between concept formulation and proof-of-concept demo). The experiments so far have been small-scale and mostly one-off. No one yet has a “continuous” magic state factory running in hardware; we have only seen single-shot demonstrations and very short sequences. The fidelities, while above threshold in some cases, are still far from the $$10^{-12}$$ target error rates needed to run, say, a billion T gates with a high overall success probability. And scaling from a 5-15 qubit test to a factory using thousands of qubits is a huge leap. In summary, the concept has been proven in principle, and key pieces have been individually shown, but magic state generation remains one of the least mature capabilities in the CRQC roadmap (hence the “red” status indicator in capability assessments). The coming years will determine how quickly this can progress from laboratory experiments to a robust engineering subsystem of a quantum computer.

Key Interdependencies and Challenges

Magic state production doesn’t exist in isolation – it sits on top of and depends on several other layers of the quantum computing stack. One major dependency is on the base quantum error-corrected platform. Before you can even attempt to distill a magic state, you need reliable logical qubits and high-fidelity Clifford operations to run the distillation circuits (which themselves are typically composed entirely of Cliffords). This means the error rate of your Clifford gates on encoded qubits must be below the distillation threshold, or else every round of distillation will fail to improve the state. In practice, that implies the underlying physical qubits and error correction must be performing well (e.g. logical error per operation below $$1\%$$ or so) before magic state factories make sense. It’s a bit of a bootstrapping problem: a quantum computer needs to be “good enough” at the easy operations in order to generate the resources for the hard operations.

Another interdependency is with the decoder and classical control system. Magic state injection is one of the moments in a quantum algorithm that require real-time conditional operations. For example, when injecting a T state, a certain measurement outcome will tell you that a corrective S (phase) gate is needed afterward. (Pauli byproducts of such gadgets can be tracked in software as a “Pauli frame update,” but the S correction is a Clifford rather than a Pauli, so it must actually be accounted for before the next non-Clifford gate – which is exactly why feed-forward speed matters here.) The quantum control system must be able to interpret the measurement quickly (with the help of the decoder to know it’s a logical measurement of the ancilla) and then adjust subsequent operations. If decoding or classical feedback is too slow, the whole processor might have to sit idle waiting during each injection, reducing the effective gate speed. Thus, high-throughput magic state injection demands a tight integration between quantum and classical processing – fast decoders, low-latency classical logic for feed-forward, and careful synchronization with the QEC cycle. Any improvements in decoder speed and accuracy will directly benefit the ability to inject magic states seamlessly into the computation.
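
A toy model of this stall effect is sketched below (the timings and the uniform measurement outcome are illustrative assumptions; no real control stack is modeled):

```python
import random

# Toy model: how decoder/feed-forward latency stalls T-state injection.
random.seed(1)
QEC_CYCLE_US = 1.0   # assumed time for one logical operation / QEC cycle

def injection_wall_time(num_t_gates, decode_latency_us):
    busy, stalled = 0.0, 0.0
    for _ in range(num_t_gates):
        busy += QEC_CYCLE_US          # the injection gadget itself
        # The conditional S correction is Clifford but not Pauli, so it
        # cannot simply be deferred into the Pauli frame: the patch idles
        # (kept alive by QEC) until the decoder returns the outcome.
        stalled += decode_latency_us
        if random.random() < 0.5:     # outcome 1 occurs half the time
            busy += QEC_CYCLE_US      # apply the S correction
    return busy, stalled

for latency in (10.0, 1.0, 0.1):
    busy, stalled = injection_wall_time(100_000, latency)
    print(f"decode latency {latency:>4} µs -> "
          f"{stalled / (busy + stalled):.0%} of wall time spent stalled")
```

The point is not the specific numbers but the scaling: once decode latency exceeds the QEC cycle time, injection throughput is set by the classical electronics rather than the quantum hardware.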

There is also a strong interplay with qubit overhead and error thresholds. Magic state distillation is only viable if the initial “raw” magic states have error rates below a certain threshold (otherwise distillation fails to converge). This threshold is typically in the 15-20% error range for the standard protocols (e.g. ~15% error is often quoted for the Bravyi-Kitaev protocol threshold). Early experiments achieved magic state fidelities just above the corresponding fidelity threshold. As hardware improves, the raw magic states should get cleaner, which means fewer rounds of distillation are needed to reach a target fidelity. In an extreme scenario, if physical qubits became so reliable that a single injected state had error, say, $$10^{-6}$$ or better, one or two rounds of a small distillation circuit could yield the states with $$10^{-12}$$-level error that are required. In fact, some researchers speculate that with techniques like magic state cultivation (an advanced form of distillation that operates continuously within a code), we might someday eliminate the need for large multi-round factories altogether. Cultivation, proposed by Gidney et al. (Google) in 2024, envisions growing a high-quality $$|T\rangle$$ state incrementally inside a surface code patch using operations only as costly as a standard CNOT gate. They showed in simulations that an error-corrected patch could autonomously amplify a T state to logical error rates around $$10^{-9}$$-$$10^{-11}$$ using far fewer qubits and time than prior distillation methods. Such approaches heavily depend on physical error rates: they “take off” once the hardware noise is below a certain point, underscoring that improving the underlying qubit quality makes magic-state generation exponentially easier.
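
The threshold behavior and the payoff from cleaner inputs are easy to see numerically. The sketch below iterates the idealized 15-to-1 error map $$p \mapsto 35p^3$$, deliberately ignoring noise in the Clifford circuitry of the factory itself (a simplifying assumption):

```python
# Idealized 15-to-1 distillation: output error ≈ 35 * p^3 per round,
# ignoring noise in the Clifford circuitry (a simplifying assumption).

def rounds_to_target(p_in, target=1e-12, max_rounds=10):
    """Rounds of 15-to-1 needed to push error p_in below target (or None)."""
    p = p_in
    for r in range(1, max_rounds + 1):
        p = 35 * p**3
        if p >= p_in:          # above threshold: distillation makes things worse
            return None
        if p < target:
            return r
    return None

for p_in in (0.20, 0.14, 1e-2, 1e-3, 1e-4):
    r = rounds_to_target(p_in)
    status = f"{r} round(s)" if r else "fails (above threshold)"
    print(f"raw error {p_in:.0e}: {status}")
```

Note how improving the raw error from $$10^{-2}$$ to $$10^{-3}$$ removes an entire round of distillation – in a real factory that translates directly into fewer qubits and less time per output state, which is the effect cultivation-style techniques push to its limit.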

The resource overhead itself creates interdependency decisions. Magic state factories consume a lot of qubits that otherwise could be used for data or parallelizing the algorithm. Designers of a full system must budget how many qubits to allocate to magic state production versus computation. If one can reduce the overhead per magic state (via better protocols), it frees up qubit budget for other uses or allows a smaller machine to achieve the same algorithm. This has driven a line of research into more qubit-efficient distillation. For example, in 2025 an approach using noise-biased qubits (from startup Alice & Bob) showed an 8-10× reduction in qubits and rounds needed per magic state by tailoring the distillation protocol to the hardware’s noise asymmetry. Their so-called “unfolded code” distillation produced a high-fidelity T-state using just 53 physical qubits (versus ~463 qubits for a comparable standard scheme) and in about 5 rounds (versus dozens). This kind of innovation is essentially trading off properties of one part of the system (a biased error rate in specific qubits) to gain advantage in another (magic state overhead). It highlights how deeply intertwined magic state production is with hardware characteristics. If your qubits have, say, 1000× higher Z error rate than X error rate (bias), you can design a more efficient distillation that corrects mostly the dominant error. Another example: a theoretical proposal from Osaka Univ. in 2025 introduced a “level-0 distillation” that performs some purification before full error correction. By cleverly filtering out error at the physical circuit level, they reduced the total space-time cost by up to 30× in simulations. These techniques depend on having flexible control of the qubits and perhaps sacrificing some generality (operating in a tailored way for just producing magic states). The takeaway is that progress in magic state tech often comes from co-design – adjusting error-correcting codes, physical qubit types, or circuit techniques to ease the burden of non-Clifford gate generation.

Finally, it’s worth noting alternative pathways that could lessen the need for magic state distillation – though each comes with its own challenges. One path is topological quantum computing using non-Abelian anyons (e.g. Majorana zero modes or other anyonic quasiparticles). In some topological schemes, certain non-Clifford gates are achieved by braiding or other protected operations, essentially providing “built-in” magic without distillation. For instance, a topological code based on Fibonacci anyons can, in theory, perform a $$\pi/8$$ gate by a sequence of braids (hence no magic state ancilla needed). However, as of 2025 no one has yet demonstrated a non-Clifford gate via braiding in a lab – and the engineering to support stable anyons (such as Majorana modes) is itself in early stages. Another approach is special quantum error-correcting codes that inherently support a larger set of transversal gates. Recently, researchers found small examples of codes (e.g. an 11-qubit code) that allow a transversal T gate, circumventing the usual Eastin-Knill no-go tradeoff at the cost of not being a CSS stabilizer code. These exotic codes might one day reduce the overhead by obviating some distillation, though typically they come with other overhead (complex encoding, no universality without switching codes, etc.). Continuous-variable (CV) quantum computing and certain photonic schemes sometimes tout the ability to perform non-Clifford operations through analog displacements or special state preparations (for example, the cubic phase gate in CV is a non-Clifford operation that could be injected via a specific optical state). If a photonic architecture (like that pursued by PsiQuantum or others) can generate large cluster states with certain non-Gaussian ancillas, they might effectively sidestep traditional magic state factories. However, those non-Gaussian ancillas are essentially “photonic magic states” by another name, and their production is also challenging. In summary, while mainstream quantum computing assumes magic state distillation as the route to universality, researchers are actively exploring whether improved hardware (topological qubits, biased noise qubits, etc.) or clever codes can mitigate the burden of magic states. Until such alternatives mature, the focus remains on making magic state production as efficient as possible within the prevailing QEC frameworks.

Outlook: Closing the Gap and How to Track Progress

Today, magic state production & injection is clearly the furthest-from-ready capability required for a CRQC (it’s often marked as a red light in readiness assessments). The gap to bridge is enormous: we must go from demonstrations on tens of qubits to factories using perhaps thousands or millions; from one-off injections to sustained delivery of high-fidelity states every few clock cycles. The good news is that the recent breakthroughs indicate a rapid maturation – we’re moving from theory to practice. Looking ahead, there are several milestones one can watch for to gauge progress in this capability:

Higher-distance magic state demonstrations

Thus far, logical magic states have been shown on distance-2 or 3 codes. A next step will be injecting and distilling magic states on larger codes (distance 5, 7, etc.), which should further suppress error rates. When we see experiments successfully produce a magic state in a distance-5 code with fidelity well above threshold, that will signal that error correction is improving to the point where magic states get even “cheaper” to refine. Google’s work with distance-5 color codes is a move in this direction, showing that as code distance grew, error rates dropped, which bodes well for magic state fidelity too.

Integrated magic state “factories” on prototype processors

In the next few years, researchers will likely attempt to run multiple rounds of distillation in succession on hardware. For example, a team might demonstrate a two-level distillation: produce some noisy T states, feed them into a distillation circuit, then maybe even repeat once more. Achieving two or three levels of distillation in a row on a logical circuit would basically be a miniature magic state factory in action. This will test the ability to do conditional logic and reuse qubits across rounds. QuEra’s neutral-atom experiment already did one round; a logical next target is doing a full factory that produces an output state significantly better than any single-shot state. Seeing a sustained output (even if at low rate) of magic states from a running quantum computer would be a breakthrough. It might initially be “I produced 10 magic states in sequence before the device lost coherence,” but that is already a step toward continuous operation.

Better error rates and fewer qubits per magic state

From the theoretical side, keep an eye on the “qubits per magic state” metric coming down. As noted, proposals like magic state cultivation and unfolded codes have slashed the overhead in simulations. The key question is whether these will translate to real hardware. If Alice & Bob (for instance) experimentally shows they can distill a magic state with just ~50 physical qubits and get error ~$$10^{-6}$$ on it, that would be a remarkable validation of low-overhead magic state generation. Likewise, if someone demonstrates the cultivation technique on a small surface code patch, showing that a logical T can be generated almost on-the-fly with minimal overhead, it could reshape how we design larger systems. Progress here might be reported in terms of achieved fidelity vs. number of qubits or cycles used. A trend to look for is error per T-gate dropping into the $$10^{-8}$$, $$10^{-9}$$ (and eventually $$10^{-12}$$) range in experiments, and the resource cost to get there (qubits and time) improving.

Alternate route successes

We should also watch companies pursuing topological qubits (like Microsoft’s Majorana approach) or other modalities to see if they demonstrate a non-Clifford gate without magic state distillation. If, say, a Majorana-based qubit design manages to realize a T gate through braiding or another protected operation, that could bypass the need for distillation (at least for that gate) – effectively solving the “magic” problem in hardware. Similarly, if a photonic quantum computer shows an on-demand high-quality non-Gaussian ancilla (the photonic equivalent of a magic state) being generated and used, that’s another form of this capability. While these are long shots, a surprise breakthrough in anyon braiding or bosonic codes could alter the landscape.


In terms of tracking progress, a reader can follow a few key indicators. One is the logical qubit error rates reported, especially for T or non-Clifford operations. As soon as labs report logical T gate error probabilities that are comparable to logical Clifford errors, that’s big news. Another indicator is the scale of distillation circuits run – papers might start reporting “we distilled a T state using 20 qubits in a [[8,3,2]] code” or similar. Also, keep an eye on the ratio of physical to logical qubits for a T state: early experiments were something like 17 physical for 1 logical qubit plus one magic ancilla; future ones might be 100 physical for a full factory outputting one state at a time, and so on. Roadmaps from companies can be telling too – some now explicitly mention plans for magic state delivery. For example, Alice & Bob’s roadmap mentions on-chip magic state factories by their next phase. If IBM, Google, or Quantinuum start including magic-state benchmarks (like “non-Clifford throughput”) in their progress reports, that will signify that this capability is moving from research toward implementation.

Finally, it’s worth noting the cascading impact this capability has: once robust magic state injection is in hand, it unlocks running any quantum algorithm on logical qubits. It will convert quantum computers from “Clifford-only toy calculators” into fully programmable, universal machines. As such, the achievement of a practical magic state factory will likely be seen as a watershed moment in quantum computing – comparable to achieving the first logical qubit or demonstrating quantum advantage. In conclusion, Magic State Production & Injection is the critical enabler for universal and cryptographically relevant quantum computing. The path to get there is challenging, but recent strides give cause for optimism. By keeping an eye on the developments highlighted above, one can gauge how quickly we are closing the magic state gap and approaching the era of true fault-tolerant quantum processors. Each incremental improvement – a higher fidelity here, a fewer-qubit protocol there – chips away at what was once viewed as an almost prohibitive obstacle. The “magic” is gradually transitioning from theory to reality, bringing Q-Day closer with each breakthrough.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.