Lattice Surgery
Introduction
Quantum computing promises to solve complex problems far beyond the reach of classical machines, but today’s quantum hardware is plagued by short-lived qubits and error rates that make long computations infeasible. Quantum error correction (QEC) is essential to stabilize qubits and enable fault-tolerant quantum computing. One of the leading QEC approaches is the surface code, a topological error-correcting code known for its high error threshold (around 1% in some implementations) and compatibility with 2D hardware layouts. Surface codes “tile” qubits on a 2D grid with local interactions, providing a robust way to detect and correct errors. However, a single surface-code patch encodes only one logical qubit, and naively performing operations between two such encoded qubits can be challenging without breaking the locality and error-protection of the code. This is where lattice surgery comes in.
Lattice surgery is a technique that allows multiple encoded qubits (each on its own surface-code patch) to interact by merging and splitting their lattices in a carefully choreographed way. In essence, lattice surgery lets us “stitch together” quantum patches to perform multi-qubit operations while preserving error correction. This concept is crucial for scaling up quantum computers, but it can be difficult to grasp without diving into quantum error correction theory.
Background: Surface Codes and the Challenge of Multi-Qubit Gates
To understand lattice surgery, it helps to first review the basics of surface codes and the problem they solve. The surface code is a type of topological QEC code derived from Kitaev’s toric code concept. Instead of a torus, the planar surface code is implemented on a flat 2D grid of physical qubits with boundary edges (no need for a donut-shaped layout). The qubits are arranged so that dedicated measurement qubits repeatedly check joint parities (stabilizers) of their neighboring data qubits. There are two types of stabilizer checks: one set of checks (often visualized as Z-stabilizers) detects bit-flip errors, and another set (X-stabilizers) detects phase-flip errors. By repeatedly measuring these stabilizers, errors on any single physical qubit can be detected and corrected without observing the encoded logical state directly.
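To make this concrete, here is a minimal sketch (our own toy example, not a real surface code) using a three-qubit bit-flip repetition code as a stand-in for a patch. It shows the essential mechanism: parity checks pinpoint an error without ever reading out the encoded amplitudes.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def basis_state(bits):
    v = np.zeros(2 ** len(bits), dtype=complex)
    v[int("".join(map(str, bits)), 2)] = 1.0
    return v

# Encode a|000> + b|111>: the logical information lives in correlations,
# not in any single physical qubit.
a, b = 0.6, 0.8
encoded = a * basis_state([0, 0, 0]) + b * basis_state([1, 1, 1])

# Parity checks (stabilizers) of this toy code.
stabilizers = {"Z0Z1": kron(Z, Z, I2), "Z1Z2": kron(I2, Z, Z)}

# Inject a bit-flip error on the middle qubit.
corrupted = kron(I2, X, I2) @ encoded

# The corrupted state is still an eigenstate of every check, so each
# expectation value is the deterministic measurement outcome.
syndrome = {name: int(round(float(np.real(corrupted.conj() @ s @ corrupted))))
            for name, s in stabilizers.items()}
print(syndrome)  # {'Z0Z1': -1, 'Z1Z2': -1} -> error on the shared middle qubit

# The checks located the error without touching a or b: correcting it
# restores the encoded state exactly.
recovered = kron(I2, X, I2) @ corrupted
print(np.allclose(recovered, encoded))  # True
```

The surface code works on the same principle, just with many more qubits, two types of checks (X and Z), and a 2D layout.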
Each surface-code patch typically encodes one logical qubit in a distributed way across many physical qubits. The logical qubit’s state is stored non-locally: for example, a string of Pauli-$$X$$ operations extending across the patch in one direction might represent the logical $$X_L$$ operator, while a string of Pauli-$$Z$$ operations across the orthogonal direction represents logical $$Z_L$$. The edges of the surface code patch come in two varieties, often called “smooth” and “rough” boundaries, corresponding to the two types of stabilizers that terminate at the edges. In surface code terminology, X-boundaries (edges where an $$X$$-type stabilizer would be cut off) are known as smooth boundaries, while Z-boundaries are called rough boundaries. These boundaries define how logical operators traverse the patch: any chain of $$Z$$ operations connecting two rough boundaries constitutes a logical $$Z_L$$ operation, and similarly a chain of $$X$$ operations between smooth boundaries acts as logical $$X_L$$. This will be important when we discuss different types of lattice surgery (merging along rough vs. smooth sides).
Critically, surface code patches are designed for nearest-neighbor interactions only, which aligns well with physical hardware constraints (e.g. superconducting qubits laid out on a chip, or spins on a 2D plane). This locality is a strength for error correction, but it makes multi-qubit logic gates non-trivial. If you have two logical qubits encoded on separate patches of a surface code, how do you perform, say, a CNOT gate between them? In principle, one could imagine braiding – an operation from topological quantum computing where encoded excitations (anyons or defects in the code) are moved around one another to enact logic. Early surface-code proposals did exactly that: they introduced “defects” or holes in the lattice that could be braided around each other to implement logical gates. However, braiding defects requires a lot of extra qubits and time, essentially making large loops in the code to move information around. As a result, defect braiding, while conceptually valid, is resource-intensive. By 2012, researchers were looking for more efficient alternatives that would preserve the strictly 2D local operations of the planar code without the overhead of dragging defects around.
Lattice surgery was introduced as that alternative. The idea, first published in a 2012 paper by Dominic Horsman, Austin Fowler, Simon Devitt, and Rodney Van Meter, was to directly couple two planar code patches along a boundary, perform a joint measurement between them, and thus achieve the same effect as a multi-qubit gate without braiding or long-range interactions. In their words, it is a “new technique enabling the coupling of two planar codes without transversal operations, maintaining the 2D-nearest-neighbor structure… [it] comprises splitting and merging planar code surfaces, and enables us to perform universal quantum computation (including magic state injection) while removing the need for braided logic in a strictly 2DNN design”. In simpler terms, lattice surgery treats each logical qubit’s surface as a piece of quantum fabric that can be stitched together or torn apart along the seams (the boundaries) to mediate interactions. When done carefully, this stitching doesn’t undermine the error correction – it effectively extends the code across qubits during an operation and then splits it again when the operation is done.
The challenge of multi-qubit gates thus transforms into a measurement problem: how do we measure a joint property of two distant logical qubits using only local operations? Lattice surgery’s answer is to merge the two qubit patches so they temporarily become one larger patch, perform a joint stabilizer measurement that spans both original patches, and then split them back into separate patches. All of this is executed via local stabilizer measurements on the boundary qubits between the patches. This approach keeps the interaction local (no qubit has to physically travel across the chip; information is teleported via measurement), and it keeps the qubits within the protective umbrella of the surface code throughout the operation.
Lattice Surgery Basics: Merging and Splitting Qubit Patches
At the heart of lattice surgery are two primitive operations: merge two surface-code patches, or split one patch into two. These are indeed like the surgical procedures they sound like – cutting and suturing a quantum surface.
- Merging: To merge two logical qubits, we take two separate surface-code patches and join them along a boundary. Imagine two square patches of qubits sitting side by side. To merge them, we introduce a row of auxiliary physical qubits between the patches – these are often called transitional qubits – and we entangle them with the edges of the two patches. By initializing the transitional qubits in a specific state and then performing the surface code’s stabilizer measurements across the new combined boundary, we effectively measure a joint operator that spans both logical qubits. There are two types of merges, corresponding to which kind of logical operator is being jointly measured:
- Rough Merge: If we merge along the rough boundaries of the patches, we initialize the intermediate qubits in the $$|0\rangle$$ state (an eigenstate of Pauli-$$Z$$). This procedure couples the patches through their Z-stabilizer checks. In effect, a rough merge results in a measurement of the joint logical $$Z$$ parity of the two qubits (i.e. measuring $$Z_L^{(1)} \otimes Z_L^{(2)}$$) across the merged boundary. The two separate surfaces become one combined surface, and the two qubits are now entangled as one larger logical qubit (their prior individual identity is lost except for the parity outcome). Any pre-existing difference in their $$Z_L$$ eigenvalues is detected by this measurement. The outcome (±1) tells us the parity of the two qubits’ logical Z values and is recorded for later use (since a -1 outcome might require a corrective operation). Because this merge involved the rough (Z-type) edges, it is called a rough surface merge.
 - Smooth Merge: If we instead merge along the smooth boundaries, we prepare the transitional qubits in the $$|+\rangle$$ state (an eigenstate of Pauli-$$X$$). Then the merging procedure involves the X-stabilizer checks, resulting in a joint measurement of the two qubits’ logical $$X$$ parity (i.e. $$X_L^{(1)} \otimes X_L^{(2)}$$). This is known as a smooth surface merge. Just like the rough merge, a smooth merge unifies the two patches into one, but it correlates their $$X$$ components instead of $$Z$$. The outcome gives the parity of the logical $$X$$ values of the two qubits. In both cases, after a merge, what were two logical qubits have effectively become one combined code block, but with a known relationship (the measured parity constraint) between the original qubit states.
 
 
Merging is analogous to performing a parity measurement on the two qubits. In simpler terms, it’s as if we asked whether the two logical qubits are the same or different in a certain basis (Z basis for rough merge, X basis for smooth merge). Importantly, a merge entangles the qubits: the measurement reveals only the joint parity, so the post-merge state is a superposition of all configurations consistent with that parity. This is how lattice surgery generates the entanglement needed for multi-qubit gates.
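At the logical level, a merge is exactly a projective parity measurement. The sketch below (abstracting each patch to a single qubit, so it models the logic rather than the physical patch dynamics) projects two logical qubits onto an eigenspace of $$Z \otimes Z$$, as a rough merge would:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
ZZ = np.kron(Z, Z)              # the joint parity a rough merge measures
I4 = np.eye(4, dtype=complex)

def measure_parity(state, rng):
    """Born-rule measurement of Z(x)Z: returns (outcome, post state)."""
    p_plus = (I4 + ZZ) / 2                          # projector onto parity +1
    prob_plus = float(np.real(state.conj() @ p_plus @ state))
    if rng.random() < prob_plus:
        return +1, p_plus @ state / np.sqrt(prob_plus)
    return -1, (state - p_plus @ state) / np.sqrt(1 - prob_plus)

rng = np.random.default_rng(7)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
state = np.kron(plus, plus)     # two patches, each in logical |+>

outcome, post = measure_parity(state, rng)
print("parity outcome:", outcome)
print(post.round(3))
# Outcome +1 leaves (|00> + |11>)/sqrt(2); outcome -1 leaves (|01> + |10>)/sqrt(2).
# Either way, the merge created entanglement while revealing only the parity.
```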
- Splitting: The inverse operation is splitting one patch into two. Lattice splitting is done by intentionally measuring a line of qubits down the middle of a single logical patch, effectively cutting it into two separate patches. To perform a split, we decide which basis to measure that dividing line of qubits in:
- Rough Split: Measure the dividing qubits in the $$X$$ basis (i.e. measure Pauli-$$X$$ on each of the qubits along the cut). This severs the patch along what were rough boundaries. A rough split creates two new patches whose Z-boundaries are formed where the cut was made. In other words, it splits the code along a line that was previously enforcing a logical $$Z$$ connection. After measuring out those qubits (and treating any detection of errors appropriately), we end up with two independent surface code patches. A rough split corresponds to breaking a merged patch apart along a rough edge, producing two qubits in possibly entangled or product states depending on the prior state.
 - Smooth Split: Measure the dividing qubits in the $$Z$$ basis (measuring Pauli-$$Z$$ on each) to cut along smooth boundaries. This creates two patches separated by what becomes a new smooth edge. A smooth split is essentially the inverse of a smooth merge.
 
 
In practice, splitting is used after a merge (or sequence of merges) to disentangle qubits or to redistribute entanglement. It’s worth noting that the act of splitting can temporarily reduce the error protection of the code – e.g. measuring a full row of qubits can momentarily lower the effective code distance at the cut. But after the split, each of the two resulting patches can again undergo rounds of error-correcting checks and, if needed, be “grown” to a larger size to restore higher fidelity. The combination of merges and splits, done in the right order, allows quantum information to teleport between patches and for logical operations to propagate.
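A matching toy model of a split: represent the merged patch by a GHZ-like state $$a|000\rangle + b|111\rangle$$ and measure out the middle “seam” qubit in the $$X$$ basis, as in a rough split. This single-qubit-per-region caricature (our own simplification) shows that the split leaves the two halves entangled, up to a sign fixed by a frame update:

```python
import numpy as np

# Merged patch, modeled as a|000> + b|111>; qubit 1 is the seam.
a, b = 0.6, 0.8
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

def project_seam(state, vec):
    """Project qubit 1 of a 3-qubit state onto |vec>, renormalize, and
    return the remaining 2-qubit state of qubits 0 and 2."""
    psi = state.reshape(2, 2, 2)
    reduced = np.einsum("abc,b->ac", psi, vec.conj()).reshape(4)
    return reduced / np.linalg.norm(reduced)

for name, vec in [("+", plus), ("-", minus)]:
    print(name, project_seam(state, vec).round(3))
# Outcome '+': a|00> + b|11>.  Outcome '-': a|00> - b|11>, i.e. the same
# state up to a Z correction on either half.  The amplitudes a, b were
# never read out -- the logical information survives the cut.
```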
Once a merge (parity measurement) is done, the qubits have been entangled or combined, and one typically follows up by splitting to separate them again (unless the goal was to fuse them permanently). The measurement outcomes from merges (and sometimes additional single-qubit measurements from splits) may dictate Pauli corrections that we need to apply to keep the logical states deterministic. These corrections are simple flips or phase flips done in software – an accepted part of any measurement-based quantum gate protocol. The remarkable thing about lattice surgery is that all the heavy lifting is done via these measurements; at no point do we require a direct controlled-unitary gate between distant physical qubits. Everything remains local.
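Those software corrections are typically not applied as physical gates at all: they are tracked classically as a “Pauli frame,” and later measurement results are reinterpreted accordingly. A minimal tracker might look like this (an illustrative design of ours, not any particular library’s API):

```python
from dataclasses import dataclass, field

@dataclass
class PauliFrame:
    """Record pending X/Z corrections instead of applying them."""
    x_flips: set = field(default_factory=set)   # qubits with a pending X
    z_flips: set = field(default_factory=set)   # qubits with a pending Z

    def record(self, pauli: str, qubit: int) -> None:
        flips = self.x_flips if pauli == "X" else self.z_flips
        flips.symmetric_difference_update({qubit})   # X.X = I, Z.Z = I

    def adjust(self, basis: str, qubit: int, raw_outcome: int) -> int:
        """A pending X flips a Z-basis result; a pending Z flips an
        X-basis result."""
        if basis == "Z" and qubit in self.x_flips:
            return -raw_outcome
        if basis == "X" and qubit in self.z_flips:
            return -raw_outcome
        return raw_outcome

# A rough merge on qubits (0, 1) returned parity -1, calling for an X
# correction on qubit 1.  We only record it...
frame = PauliFrame()
frame.record("X", 1)
# ...and reinterpret a later raw Z-basis readout of qubit 1 on the fly:
print(frame.adjust("Z", 1, +1))  # prints -1
```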
Performing Gates with Lattice Surgery: Example of a CNOT
Multi-qubit gates, especially the controlled-NOT (CNOT) gate, are fundamental for universal quantum computing. In a surface code architecture, a logical CNOT between two qubits can be realized through a sequence of lattice surgery operations. The general strategy is to use a temporary ancilla patch (often called a transitional logical qubit) that interacts with the two qubits in sequence, ferrying the entanglement from the control to the target. This is conceptually similar to performing quantum teleportation of a gate.
One known protocol for a lattice-surgery CNOT goes as follows:
- Start with two logical qubit patches: a control qubit $$C$$ and a target qubit $$T$$, each in an arbitrary state. Prepare a third patch, the ancilla (or transitional) patch $$A$$, in a known state (typically $$|+\rangle$$, a logical plus state). The ancilla is like a blank scratchpad qubit that will mediate the interaction.
 - Perform a merge between the control $$C$$ and the ancilla $$A$$. Different sources describe slightly different orderings (which merge comes first depends on how the ancilla is initialized); with $$A$$ prepared in $$|+\rangle$$, the first step is a rough merge between $$C$$ and $$A$$. A rough merge means we’re measuring $$Z_L^C \otimes Z_L^A$$, entangling the control’s $$Z$$ information with the ancilla. The reason this works as a first step is that it reveals only the joint parity, never the control’s individual value – the control’s superposition, and hence its role as the control of the CNOT, is preserved. After this merge, $$C$$ and $$A$$ share a known $$Z$$-parity relation.
 - Next, we split $$C$$ and $$A$$ apart again. This is a rough split (undoing the rough merge) that separates the patches while preserving the correlation created by the parity measurement. Now the control qubit is back to being an independent patch, and the ancilla $$A$$ carries entanglement with the control.
 - Now, merge the ancilla $$A$$ with the target $$T$$ in the other basis (if the first merge was rough, this one is smooth, and vice versa). In our example, since we did a rough merge first, the second merge is a smooth merge between $$A$$ and $$T$$. This measures $$X_L^A \otimes X_L^T$$, effectively coupling the ancilla’s information into the target’s $$X$$ basis. At this point, the target qubit $$T$$ will be flipped or not depending on the state that was entangled onto $$A$$ from the control. Once the ancilla is measured out (next step), the net result is a CNOT from $$C$$ to $$T$$: if the control was in state $$|1\rangle$$ (logical 1), the target is flipped; if the control was $$|0\rangle$$, the target is unchanged. This is exactly the truth table of a CNOT gate (up to Pauli frame adjustments due to the measurement randomness).
 - Finally, the ancilla must be removed from the picture. Measuring $$A$$ in the $$Z$$ basis collapses any remaining entanglement involving the ancilla (alternatively, another split can be used). After all merges, splits, and the ancilla measurement, we end up with the control $$C$$ and target $$T$$ as separate logical qubits again, but the target’s state has been flipped iff the control was $$|1\rangle$$ – which is exactly what we want from CNOT. The random measurement outcomes collected along the way (two joint parity outcomes and one single-qubit outcome) are tracked and, if needed, compensated by Pauli gates on $$C$$ or $$T$$ (simple software corrections that ensure the logical operation is deterministic).
 
This description might seem elaborate, but importantly, it uses only joint measurements and single-qubit measurements on the lattice – no coherent two-qubit unitary gate ever directly touches the logical qubits. The whole CNOT is completed in a few rounds of the surface code cycle. In fact, it’s known that a lattice-surgery CNOT can be done with two joint parity measurements and one single-qubit measurement, which is quite efficient. All intermediate steps maintain fault-tolerance; at worst, a failed measurement or an error during the process is caught by the code’s syndrome checks.
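The sketch below simulates that whole sequence at the logical level (one qubit per patch; our own toy model, not a patch-level simulation): a $$Z \otimes Z$$ parity measurement between control and ancilla, an $$X \otimes X$$ parity measurement between ancilla and target, a final $$Z$$ measurement of the ancilla, and the outcome-dependent Pauli corrections. Over many random measurement histories it reproduces the CNOT unitary exactly:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def measure(state, op, rng):
    """Born-rule measurement of a +/-1-valued Pauli operator."""
    p_plus = (np.eye(len(state)) + op) / 2
    prob = float(np.real(state.conj() @ p_plus @ state))
    if rng.random() < prob:
        return +1, p_plus @ state / np.sqrt(prob)
    return -1, (state - p_plus @ state) / np.sqrt(1 - prob)

def surgery_cnot(control_target, rng):
    """CNOT via two joint parity measurements plus one single-qubit
    measurement.  Qubit order: control, ancilla, target."""
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    # Interleave the |+> ancilla between control and target.
    psi = np.einsum("ct,a->cat", control_target.reshape(2, 2), plus).reshape(8)

    m1, psi = measure(psi, kron(Z, Z, I2), rng)   # rough merge C-A
    m2, psi = measure(psi, kron(I2, X, X), rng)   # smooth merge A-T
    m3, psi = measure(psi, kron(I2, Z, I2), rng)  # retire the ancilla

    # Pauli corrections dictated by the random outcomes.
    if m2 == -1:
        psi = kron(Z, I2, I2) @ psi               # Z on control
    if (m1 == -1) != (m3 == -1):
        psi = kron(I2, I2, X) @ psi               # X on target

    # The ancilla is now in a definite Z state; keep its measured branch.
    psi = psi.reshape(2, 2, 2)
    return psi[:, 0 if m3 == +1 else 1, :].reshape(4)

rng = np.random.default_rng(42)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# One random two-qubit input, many random measurement histories.
v = rng.normal(size=4) + 1j * rng.normal(size=4)
v /= np.linalg.norm(v)
for _ in range(10):
    out = surgery_cnot(v, rng)
    expected = CNOT @ v
    assert abs(abs(out.conj() @ expected) - 1) < 1e-9  # equal up to phase
print("measurement-based CNOT reproduces the CNOT unitary")
```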
It’s worth noting that other single-qubit Clifford gates, like a logical Hadamard or Phase (S) gate, can often be done by transversal physical operations on the patch (i.e. applying a physical Hadamard to every qubit in the patch achieves a logical Hadamard) – but even there, lattice surgery can play a role in adjusting the code orientation or performing certain gates more efficiently. And for non-Clifford gates like the $$T$$ ($$\pi/8$$) gate, lattice surgery provides a way to inject magic states into the computation: one prepares an encoded ancilla qubit in the appropriate magic state and then uses joint measurements to combine it with the data – essentially teleporting the $$T$$ gate onto the qubit. Lattice surgery is flexible enough to handle these state-injection protocols as well, which are critical for universal quantum computing.
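State injection can be sketched in the same style. The toy model below (logical level only, using the standard teleportation identity) consumes an ancilla prepared in the magic state $$T|+\rangle$$ via a joint $$Z \otimes Z$$ measurement, measures the ancilla in the $$X$$ basis, and applies outcome-dependent $$S$$ and $$Z$$ corrections:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def measure(state, op, rng):
    """Born-rule measurement of a +/-1-valued Pauli operator."""
    p_plus = (np.eye(len(state)) + op) / 2
    prob = float(np.real(state.conj() @ p_plus @ state))
    if rng.random() < prob:
        return +1, p_plus @ state / np.sqrt(prob)
    return -1, (state - p_plus @ state) / np.sqrt(1 - prob)

def inject_t(data, rng):
    """Teleport a T gate onto `data` using a magic-state ancilla."""
    magic = T @ np.array([1, 1], dtype=complex) / np.sqrt(2)   # T|+>
    psi = np.kron(data, magic)                                 # (data, ancilla)

    m1, psi = measure(psi, np.kron(Z, Z), rng)          # rough merge (Z(x)Z)
    m2, psi = measure(psi, np.kron(np.eye(2), X), rng)  # ancilla X readout

    # Extract the data qubit: the ancilla is now in |+> or |->.
    anc = np.array([1, m2], dtype=complex) / np.sqrt(2)
    out = psi.reshape(2, 2) @ anc.conj()
    out /= np.linalg.norm(out)

    # Outcome-dependent Clifford corrections: S fixes parity -1 (the data
    # picked up T-dagger instead of T), Z fixes the X-readout sign.
    if m1 == -1:
        out = S @ out
    if m2 == -1:
        out = Z @ out
    return out

rng = np.random.default_rng(1)
data = np.array([0.6, 0.8j], dtype=complex)
for _ in range(10):
    out = inject_t(data, rng)
    assert abs(abs(out.conj() @ (T @ data)) - 1) < 1e-9  # T|data> up to phase
print("T gate injected via joint parity measurement")
```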
Advantages of Lattice Surgery
Why go through all this trouble of merging and splitting codes? The lattice surgery approach offers several key benefits for quantum computing architectures:
- Preserves Locality and 2D Architecture: Lattice surgery avoids the need to shuttle qubits around or create long-range connections. All interactions are mediated by local stabilizer measurements on neighboring qubits. The original lattice-surgery proposal explicitly sought to “maintain the 2DNN (two-dimensional nearest-neighbor) nature” of the surface code and remove the need for braiding. This is vital for many hardware platforms (like superconducting qubits or quantum dots) where physical connectivity is limited to adjacent qubits on a chip. By keeping everything local, lattice surgery makes it easier to design a hardware layout and control system – you don’t need swap networks or flying qubits to enact long-distance gates; you just need the ability to measure multi-qubit stabilizers on patches.
 - Resource Efficiency: Perhaps the most celebrated advantage, lattice surgery can significantly reduce the overhead (number of physical qubits and operations) required for quantum algorithms when compared to braiding defects. A 2018 study by Fowler and Gidney quantified this: when using lattice surgery in a large-scale surface code computer, the storage overhead for logical qubits could be cut by a factor of >4, and the magic-state distillation overhead (needed for T gates) by nearly a factor of 5, compared to approaches that rely on defect braiding. In real terms, they found that an algorithm needing $$10^8$$ T-gates could run on about $$3.7\times10^5$$ physical qubits with lattice surgery, whereas braiding-based methods would need substantially more. These numbers “strongly suggest that defects and braids… should be deprecated in favor of lattice surgery” in future designs. In short, lattice surgery achieves the same logical operations with fewer qubits, because you don’t need large braided loops or extra ancilla patches sitting around to move information. Everything happens by momentarily extending patches into each other, which is a relatively qubit-efficient process.
 - Conceptual Simplicity (at Scale): While lattice surgery involves some complex sequences, it actually simplifies the conceptual picture of a quantum computer at scale. Researchers like Daniel Litinski have shown that you can describe an entire large-scale quantum computer as a collection of tiled surface-code patches playing a “game” of merges and splits, without ever invoking exotic anyon braiding diagrams. In a popular 2019 paper, Litinski introduced a framework where logical qubits are tiles and all operations are described by a small set of lattice-surgery rules – essentially like a tile-based game with a few rules that can implement any quantum circuit. This high-level description makes it easier for quantum architects to design and optimize processes, and it demystifies the operation of a quantum computer for those not deeply versed in topology. Instead of imagining dragging holes around (which is hard to visualize in hardware), one imagines combining and splitting patches via measured connections, which is much more straightforward to translate into control sequences on actual qubits.
 - Compatibility with Error Correction Protocols: Lattice surgery is designed to work within the error correction cycle of the surface code. All of its operations (merge, split) are essentially sequences of stabilizer measurements, which means they can be incorporated into the same periodic syndrome extraction process that is running to correct errors. This means gates via lattice surgery can be done while the code is actively correcting errors, preserving fault-tolerance. The measurement outcomes from the surgery just become additional pieces of data that the classical decoder can use (both for error correction and to determine logical gate results). In other words, lattice surgery doesn’t interrupt or disable the QEC; it cooperates with it. This is a huge practical advantage: even as you perform logical gates, the code is watching for errors and keeping them at bay.
 - Extensibility to Other Codes: Although developed for the surface code, the general concept of lattice surgery – joint measurements to couple codes – has inspired similar techniques in other topological codes. Researchers have proposed, for example, lattice surgery for color codes and other quantum error-correcting codes, aiming to leverage the idea of merging/splitting in different contexts. The surface code remains the most prominent setting for lattice surgery, but this cross-pollination shows the idea’s versatility.
 
No discussion of advantages is complete without mentioning potential challenges: Lattice surgery’s reliance on high-fidelity measurements means that it demands a fast and accurate classical feedback loop. After a merge, you often need to apply a correction based on the parity outcome; if your measurements are slow or noisy, that could be a vulnerability. Fortunately, in many experimental platforms measurement and feed-forward have become quite efficient (for instance, superconducting qubit systems now regularly incorporate real-time feedback). Another subtle point is that optimizing a sequence of lattice-surgery operations for a complex circuit can be algorithmically hard – in fact, finding the optimal scheduling of merges/splits has been shown to be an NP-hard problem in general. Researchers are actively developing compilers and heuristics to tackle this optimization so that we can get the most out of lattice surgery with minimal delay or idle qubits.
Historical Development and Key Contributors
The concept of lattice surgery sits at the intersection of quantum error correction and topological quantum computing, and its development has been a collaborative effort over the past couple decades:
- Topological Code Foundations (late 1990s – 2000s): The groundwork comes from Alexei Kitaev’s toric code (1997) and the broader idea of anyonic quantum computation. Kitaev showed that you could encode qubits into the global degrees of freedom of a 2D lattice with periodic boundary conditions (a torus) and perform operations by braiding anyonic excitations. In the late 1990s and early 2000s, researchers such as Sergey Bravyi, Alexei Kitaev, and Michael Freedman (at Microsoft), along with Eric Dennis, Andrew Landahl, and John Preskill, explored planar versions of these codes – introducing the idea of boundaries (rough and smooth) so that the code could be laid out on a plane rather than a torus. By the mid-2000s, the surface code (a planar variant of the toric code) emerged as a leading QEC candidate due to its high error threshold and the fact that it only needs local operations. Notably, a 2012 paper by Austin Fowler et al. in Phys. Rev. A established practical aspects of surface codes, and compact “rotated” surface-code geometries, which reduce the number of qubits needed for a given error-correcting capability, were developed around the same time. Fowler and colleagues also demonstrated through simulations that error rates of around 1% could be tolerated, sparking immense interest in the surface code for real hardware.
 - The 2012 Lattice Surgery Proposal: The specific notion of “lattice surgery” was introduced by Dominic Horsman, Austin G. Fowler, Simon Devitt, and Rodney Van Meter in a 2012 paper titled “Surface code quantum computing by lattice surgery.” This work was a milestone: it coined the term “lattice surgery” and laid out the framework of splitting and merging surfaces to execute gates. They showed explicitly how a CNOT gate could be performed by merging two patches into one and then splitting, rather than by braiding, and they included magic state injection for universality. One impressive claim in that paper was that an encoded CNOT between two distance-3 logical qubits could be done with just 53 physical qubits total (including ancillas), half the number that would be required by the best alternative method at the time. The paper’s publication in New Journal of Physics gave the approach legitimacy and visibility. The key contributors brought complementary expertise: Fowler was a driving force behind surface code implementations, while Van Meter and Devitt worked on quantum architectures and quantum networking, and Horsman brought a perspective that tied the concepts together. This team essentially set the direction for the next decade of fault-tolerant quantum computing research.
 - Refinements and Related Techniques (2013–2018): In subsequent years, researchers expanded on lattice surgery. Some looked at performing lattice surgery with twist defects (a variant involving twists in the code lattice that can reduce the number of steps for certain operations), while others like Craig Gidney and Austin Fowler worked on optimizing the overhead. Fowler, moving to Google’s Quantum AI team, became a vocal proponent of lattice surgery as the preferred methodology. By 2018, Fowler and Gidney published the “Low overhead quantum computation using lattice surgery” paper we mentioned, strongly advocating that the community move away from braiding defects because lattice surgery was so much more resource-efficient. Their work provided concrete numbers and helped persuade many that lattice surgery wasn’t just a theoretical curiosity but rather the strategy that large-scale quantum computers should employ. During this period, tools like the ZX-calculus were also applied to lattice surgery, providing a high-level graphical way to reason about the needed measurements and corrections. Researchers at ETH Zurich and elsewhere developed compiler techniques to translate arbitrary quantum circuits into sequences of lattice surgery operations (since any circuit can be broken down into parity checks and teleportations at the logical level).
 - Accessible Expositions (2019–2025): As the field matured, there was a push to make lattice surgery more understandable to newcomers. Daniel Litinski’s 2019 paper “A Game of Surface Codes” is a standout example of communicating lattice surgery in a playful yet rigorous way. He showed that one can avoid the jargon of anyons entirely and teach lattice surgery as a kind of board game with rules for merging and splitting tiles – a perspective that has been influential in education and even in developing software tools. More recently, in 2025, Chatterjee et al. published “Lattice Surgery for Dummies”, a tutorial-style overview aiming to demystify the topic for a broader audience. This reflects the growing need to train more engineers and scientists in fault-tolerant quantum computing now that experimental quantum processors are reaching the logical qubit era.
 - Experimental Milestones: For a long time, lattice surgery was purely theoretical. But very recently, experimental groups have started demonstrating the first small instances of lattice surgery. In 2021, a team led by Thomas Monz, Rainer Blatt, and others in Innsbruck reported the first experimental realization of lattice surgery on an ion-trap quantum computer. They used 10 trapped-ion qubits to encode two logical qubits (using a simple version of the surface code or a closely related error-detecting code) and successfully merged and split them to create entanglement and even teleport a logical state – effectively performing lattice surgery operations in hardware. This was published in Nature as a demonstration of entangling logical qubits via lattice surgery. Around the same time, the quantum computing company Quantinuum (which uses trapped-ion technology) announced a similar achievement, calling it the first step toward fault-tolerant logical gates in a real device. There have also been experiments with superconducting qubits: for example, in 2023, researchers showed the splitting of a distance-3 surface code into two distance-3 patches (a primitive lattice surgery move) on a superconducting platform. These are still early days – the demonstrated operations are on very small codes (distance 2 or 3, enough to detect, or at best correct, a single error) – but they are important proofs of concept. They show that lattice surgery is not just a theoretical idea but something we can begin to implement as quantum hardware scales up. We can expect to see more experiments in the next few years where small logical qubits are entangled and used to perform logical operations via lattice surgery on various platforms, be it superconductors, ions, or other qubit technologies.
 
In terms of people, Austin Fowler deserves special mention as a key contributor; he was involved in the original lattice surgery concept and has been instrumental in many follow-up improvements and advocacy for the approach. Simon Devitt and Rodney Van Meter also contributed to the initial idea, blending concepts from quantum computing and network architecture. Daniel Litinski helped translate the concept into a more digestible form, which has influenced software tools. On the experimental side, groups led by Rainer Blatt/Thomas Monz (Innsbruck) and Hartmut Neven/Julian Kelly (Google Quantum AI), among others, are driving the first real implementations. As of 2025, lattice surgery is a vibrant research area, and it’s moving rapidly from theory into practice.
Outlook
Lattice surgery has proven itself as a linchpin of modern quantum computing architecture. It embodies the principle of doing more with less – performing complex multi-qubit operations with only local actions and clever use of measurements. The technique will likely play a central role in any large-scale quantum computer that uses error-correcting codes, especially since the surface code remains a top contender for error correction in superconducting qubits, trapped ions, and other platforms.
There are still open challenges and active research topics related to lattice surgery. One is optimization: figuring out the best sequences of merges and splits to perform a given quantum algorithm with minimal delay and qubit overhead. As mentioned, this can be a hard problem, but progress is being made with automated compilers that can take a high-level quantum circuit and compile it down to a schedule of lattice surgery operations. Another active area is finding ways to speed up the distillation of magic states using lattice surgery, since state injection is a bottleneck for many algorithms – techniques like simultaneous multi-qubit merges might offer some speedups. Additionally, researchers are exploring hybrid techniques that combine lattice surgery with small braid-like operations or other code deformation tactics to see if further resource savings are possible.
The conceptual clarity provided by lattice surgery is also influencing how we think about modular quantum computing. Because lattice surgery operations resemble a form of quantum communication (teleporting information between patches), they could be used in distributed or modular quantum computing setups where you have separate chips or modules that need to interact. In fact, one can imagine in the future connecting two quantum processors by “virtual lattice surgery” – joint measurements via entanglement links – to form a larger logical quantum computer.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.