Post-Quantum

Quantum Computer Factors Record 48-Bit Number – How Far Are We from Cracking RSA-2048?

Quantum computers have long been theorized as a threat to modern encryption because of their potential ability to factor large numbers exponentially faster than classical machines. In recent years, experimental progress in quantum factoring has accelerated. The largest number ever factored using quantum computing methods has now reached 48 bits – a 15-digit integer – using a novel hybrid quantum-classical approach. This is a significant leap beyond the small examples (like 21) that were previously demonstrated on quantum devices. Yet, the resources and techniques used for these modest numbers shed light on just how far we still are from factoring cryptographic RSA keys (typically 2048 bits long) in practice.

Early Quantum Factoring Feats: From 15 to 21 (Fully Quantum Methods)

The journey toward large-number factoring on quantum computers began with very small integers, tackled by pure quantum algorithms (notably Shor’s algorithm). Shor’s algorithm, proposed in 1994, theoretically allows a quantum computer to factor an n-bit integer in polynomial time – an ability that would undermine RSA encryption. However, implementing Shor’s algorithm on actual hardware is extremely demanding. Early demonstrations in the 2000s and 2010s could only handle tiny numbers, often with additional help from classical computation or prior knowledge. Key milestones included:

  • 2001 (NMR Quantum Computer) – A team led by IBM and Stanford factored $$15 = 3 \times 5$$ using a 7-qubit nuclear magnetic resonance quantum device. This was the first realization of Shor’s algorithm, albeit in a non-standard quantum platform (liquid-state NMR). It showed the algorithm worked for the smallest non-trivial example, but it wasn’t a scalable solution.
  • 2012 (Photonic Qubits) – Researchers factored $$21 = 3 \times 7$$ using a compiled version of Shor’s algorithm with four photonic qubits. This experiment used a simplified optical circuit to perform the quantum order-finding subroutine. Around the same time, $$35 = 5 \times 7$$ was also factored in a photonic system, demonstrating another incremental step. These photonic experiments were impressive, but they relied on “compiled” circuits tailored to the specific number and assumed some prior knowledge to reduce the resource requirements.
  • 2016–2021 (Superconducting Qubits) – Small integers like 15, 21, and 35 were factored on early superconducting quantum processors (the kind of quantum chips IBM and Google use). In 2021, for example, researchers used IBM’s 5-qubit quantum cloud computer to successfully run a complete Shor’s algorithm for N=21 without shortcuts, leveraging optimized approximate gates to fit within the hardware limitations. They verified genuine quantum entanglement in the process – a key ingredient for quantum speedup. Factoring 21 may sound trivial (your laptop can do it instantly), but doing it via Shor’s algorithm on real qubits was an important proof of concept.
  • Trapped Ions and Others – Trapped-ion quantum computers (another leading platform) also joined the fray. Experimentalists demonstrated factoring of small numbers like 15 and 21 on ion-trap systems as well. Each platform – superconducting circuits, trapped ions, photons, NMR – helped verify that Shor’s algorithm (or slight variants of it) works, and each revealed the challenges of qubit noise, limited coherence time, and circuit depth that future quantum computers must overcome to factor larger numbers.

Notably, all these fully quantum demonstrations were limited to very small toy numbers (6 bits or fewer). The largest integer factored purely with Shor’s algorithm on actual quantum hardware is often cited as 21, or 35 in some photonic experiments – far, far short of breaking modern cryptography. Each required heroic efforts in circuit optimization or “compiling” the algorithm with extra classical insight. The quantum computers simply didn’t have enough qubits, or were too error-prone, to run the full general algorithm for anything larger at the time.
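To see why period-finding is the quantum bottleneck, here is a minimal Python sketch of Shor’s classical half: once the order r of a random base a mod N is known, the factors fall out of a gcd computation. The `order` function below brute-forces the step a quantum computer would accelerate; the rest is the standard classical post-processing Shor described.

```python
from math import gcd

def order(a, n):
    """Brute-force the multiplicative order of a mod n
    (the step quantum period-finding would accelerate)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given a base a, try to extract factors of n from the order of a mod n."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g            # lucky guess: a already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None                 # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                 # trivial square root: retry
    p = gcd(y - 1, n)
    return (p, n // p) if 1 < p < n else None

# Factor 21 the way Shor's algorithm would, minus the quantum speedup
for a in range(2, 21):
    result = shor_classical_part(21, a)
    if result:
        print(f"base a={a}: 21 = {result[0]} x {result[1]}")
        break
```

For N = 21 and base a = 2, the order is 6, and gcd(2^3 − 1, 21) = 7 immediately yields the factorization 7 × 3.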

Hybrid Quantum-Classical Breakthroughs: Pushing into the 18–48 Bit Realm

Given the difficulty of running Shor’s algorithm directly on noisy intermediate-scale quantum (NISQ) devices, researchers began exploring hybrid approaches. These methods leverage both quantum and classical computing, reformulating the factoring problem into a form more amenable to shallow quantum circuits. In essence, they offload part of the work to classical pre- or post-processing, and use quantum bits to handle a crucial sub-problem (such as searching for a solution in an optimization landscape). Here are a few key developments in hybrid quantum factoring that have extended the reach beyond what straight Shor implementations have achieved:

Quantum Annealing (2017–2018) – Scientists mapped small factoring problems onto the Ising models solved by quantum annealers like the D-Wave system. Instead of doing period-finding, this approach turns factoring into a combinatorial optimization: essentially, the annealer tries to find the assignment of bits (the factors’ binary digits) that minimizes an energy function encoding the multiplication constraints. Using quantum annealing, numbers up to six digits (around 19 bits) have been factored. For example, in 2018 a D-Wave quantum annealer was used to factor integers such as 15, 143, 59989, and 376289, the largest of which is a 19-bit, 6-digit number. While a 19-bit semiprime is trivial for classical algorithms, this showed that quantum annealing (an analog form of quantum computing) could handle slightly larger inputs than Shor’s algorithm on gate-based devices at that time. (It’s worth noting that annealers require many physical qubits to represent even a small problem – the D-Wave machine had over 2000 qubits available, but most were involved in encoding the problem redundantly due to its sparse connectivity.)
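The annealing formulation is easy to sketch classically. The following toy (an illustrative encoding, not D-Wave’s actual Ising embedding) scores each candidate bit assignment for the factors by the squared multiplication error – the quantity an annealer would drive to zero – and simply exhausts the tiny search space in place of the annealer:

```python
from itertools import product

def factoring_energy(n_target, p_bits, q_bits):
    """Energy that is zero exactly when p*q == N.
    N is odd, so both factors are odd and their lowest bit is fixed to 1."""
    p = 1 + sum(b << (i + 1) for i, b in enumerate(p_bits))
    q = 1 + sum(b << (i + 1) for i, b in enumerate(q_bits))
    return (n_target - p * q) ** 2

def anneal_by_exhaustion(n_target, n_p_bits, n_q_bits):
    """Classical stand-in for the annealer: scan all bit assignments
    and keep the lowest-energy one."""
    best = None
    for bits in product((0, 1), repeat=n_p_bits + n_q_bits):
        e = factoring_energy(n_target, bits[:n_p_bits], bits[n_p_bits:])
        if best is None or e < best[0]:
            best = (e, bits)
    e, bits = best
    p = 1 + sum(b << (i + 1) for i, b in enumerate(bits[:n_p_bits]))
    q = 1 + sum(b << (i + 1) for i, b in enumerate(bits[n_p_bits:]))
    return p, q, e

p, q, energy = anneal_by_exhaustion(143, 3, 3)
print(p, q, energy)   # a zero-energy assignment encodes the factors of 143
```

An annealer explores this landscape by quantum tunneling rather than exhaustion, but the objective it minimizes has exactly this shape.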

Variational Quantum Factoring (2019–2021) – A more general strategy emerged: formulate factoring as an optimization problem (à la SAT-solving or QUBO) that a gate-based quantum computer can attempt to solve using hybrid algorithms like the Quantum Approximate Optimization Algorithm (QAOA) or the Variational Quantum Eigensolver (VQE). In 2014, researchers showed that a 4-qubit quantum processor (with extensive classical help) could inadvertently factor larger numbers like 143 and even 56153 by cleverly structuring the problem, although those demonstrations were not fully general or efficient. More systematically, in 2021 a team led by Karamlou et al. used an IBM superconducting quantum chip to run a variational factoring algorithm (VQF) on several larger numbers. They managed to factor a 41-bit integer, N = 1,099,551,473,989 (over a trillion), using only 3 qubits in the quantum processor. How is that possible? Essentially, they preprocessed the problem classically to reduce the search space dramatically – 24 rounds of a classical pruning algorithm cut the 41-bit factoring problem down to a sub-problem involving just 3 quantum bits. The quantum chip then ran a QAOA circuit (with up to 8 layers of alternating operations) to find a solution, and indeed it found the correct factors (which turned out to be 1,048,589 and 1,048,601, two 21-bit primes). They also factored 3127 (four qubits used) and 6557 (five qubits) in similar fashion as intermediate examples. These results were hybrid in every sense: the heavy lifting of narrowing down possible factors was done by classical algorithms, and the quantum device was used as a sort of accelerator to finish solving a tiny piece of the puzzle. While this approach is not efficient for general large numbers (the amount of classical pre-processing needed can blow up for larger inputs), it demonstrated that quantum computers can team up with classical routines to tackle bigger integers than the quantum hardware could handle on its own at this stage.
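One concrete flavor of such classical pre-processing is deducing bit-level constraints from binary multiplication. For an odd semiprime N with odd factors, writing the factors’ second-lowest bits as p1 and q1 and expanding p·q mod 4 shows that N mod 4 alone fixes whether those bits agree. A toy deduction of this kind (illustrative only – not the exact pruning rules used in the paper):

```python
def low_bit_constraint(n):
    """Deduce a relation between the second-lowest bits of the factors
    of an odd semiprime N, purely from N mod 4.
    With p = ...p1 1 and q = ...q1 1 (both odd):
        p*q mod 4 == 2*((p1 + q1) % 2) + 1
    so N mod 4 determines the parity of p1 + q1."""
    assert n % 2 == 1, "only odd N have two odd factors"
    return "p1 == q1" if n % 4 == 1 else "p1 != q1"

# 3127 = 53 * 59: 53 = 0b110101 (p1 = 0), 59 = 0b111011 (q1 = 1)
print(3127 % 4, low_bit_constraint(3127))
```

Each deduction like this eliminates candidate bit assignments before any quantum hardware is involved; chain enough of them and only a handful of unknown bits remain for the QAOA stage.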

Adiabatic Quantum Algorithms (NMR) – In parallel, physicists continued pushing analog quantum methods. An NMR quantum simulator in 2017 successfully factored a 19-bit number (291,311) using an adiabatic process. This was noteworthy as it far exceeded the two-digit numbers (like 21 and 35) that Shor’s algorithm had managed on hardware, using a completely different quantum strategy. Again, classical insight was used to set up the problem, and the quantum system found the answer by evolving its spins slowly to reach the ground state encoding the factors. It underscored that quantum factoring need not mean strictly running Shor’s algorithm – there are many possible pathways to the same end.

These hybrid and alternative approaches kept raising the bar of the largest integer factored with quantum help. Still, until recently, the record hovered around 20 to 30 bits for years, and those larger (30–40 bit) cases leaned heavily on classical computations to succeed. That’s why a late-2022 announcement of a 48-bit number being factored using only 10 qubits caused a stir.

The 48-Bit Milestone: Factoring a 15-Digit Number with 10 Superconducting Qubits (SQIF Algorithm)

In December 2022, a group of Chinese researchers led by Bao Yan and collaborators posted a provocative preprint on arXiv, claiming a breakthrough in quantum factoring. They introduced an algorithm they call Sublinear Quantum Integer Factorization (SQIF), and reported that they had factored a 48-bit integer (specifically N = 261,980,999,226,229) using a 10-qubit superconducting quantum processor. This number is about $$2.6×10^{14}$$ (in the hundreds of trillions) – far larger than any previous integer factored predominantly with quantum hardware. For context, 48 bits is a 15-digit decimal number; by comparison RSA-2048 is ~617 digits, so there’s still a huge gap, but 48-bit quantum factoring is well beyond the two-digit numbers reached by pure Shor demonstrations, and ahead of earlier hybrid records as well.

Algorithm – a Lattice/QAOA Hybrid: The SQIF algorithm is very different from Shor’s algorithm. Instead of using quantum Fourier transforms to find the period of a modular exponential function, SQIF leverages a lattice-based approach. In essence, the authors mapped the factoring problem to the Closest Vector Problem (CVP) on a lattice. This is striking because CVP (and related lattice problems) are at the heart of several post-quantum cryptographic schemes – an ironic twist where a technique from the world of post-quantum crypto is being used to attack the pre-quantum RSA problem. The algorithm builds on a variant of Schnorr’s lattice-based factoring method, which tries to find a sufficiently short combination of multiples of the target number and known approximate roots, such that a factor can be deduced. Classically, Schnorr’s method finds these short vectors using lattice reduction (like the LLL algorithm). The Chinese team turbocharged this step by inserting a quantum routine: they formulated finding a short vector as an optimization problem and then used QAOA (Quantum Approximate Optimization Algorithm) as a subroutine to solve it. QAOA is a variational algorithm that runs a shallow quantum circuit (with parameters that are tuned by a classical computer) to approach the solution of combinatorial problems. In this case, each QAOA iteration helped find what the paper calls an “sr-pair” (short relation pair) – essentially a clue that brings you closer to the factors. With enough of these clues, the factors can be resolved.
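The end-game of a Schnorr-style method is classical and well understood: the accumulated smooth relations are combined (via linear algebra, as in Dixon’s method or the quadratic sieve) into a congruence of squares x² ≡ y² (mod N), after which a gcd splits N. The hard part – which SQIF attacks with QAOA – is finding the relations. As a minimal illustration of that final step only, here is a textbook congruence for N = 84923:

```python
from math import gcd

def factor_from_congruence(n, x, y):
    """Given x^2 ≡ y^2 (mod n) with x not congruent to ±y,
    gcd(x - y, n) yields a nontrivial factor of n.
    Schnorr-style lattice methods (and SQIF's quantum subroutine)
    exist to *find* such relations; here we use a known textbook pair."""
    assert (x * x - y * y) % n == 0, "not a congruence of squares mod n"
    p = gcd(x - y, n)
    return p, n // p

# Classic example: 505^2 mod 84923 == 256 == 16^2
print(factor_from_congruence(84923, 505, 16))
```

This is emphatically not SQIF itself – only the universal last mile that any relation-collecting factoring method, classical or quantum-assisted, ends with.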

Hardware – Zuchongzhi Superconducting Chip: The experiment was conducted on a 10-qubit section of a superconducting quantum processor. The paper didn’t explicitly name the device, but China’s superconducting qubit efforts (e.g., the Zuchongzhi processor) are well known. By 2022, 10 high-quality qubits were within reach – their coherence and gate fidelity would have been carefully calibrated for the QAOA routine. Since QAOA circuits are relatively shallow (especially compared to full Shor’s algorithm), they are a good fit for NISQ-era hardware. Still, the challenge is that one must run the QAOA many times to find a satisfactory solution with high probability. The researchers reported obtaining the correct factors after a certain number of runs, demonstrating a clear peak in the quantum measurements corresponding to the solution.

Result Verification: How do you verify that a quantum computer has factored a number? Simple: once the algorithm outputs candidate factors (in this case, two numbers of roughly 24 bits each), you multiply them on a classical computer and check whether the product matches the original 48-bit integer. The team found that $$261,980,999,226,229 = 15,538,213 \times 16,860,433$$, successfully recovering the two prime factors. This was a straightforward verification, as multiplying two such numbers is trivial for a classical computer (and indeed for a person with a calculator). The importance, however, is that the quantum processor actually contributed essential information to find those factors – it wasn’t just fed the answer. According to the paper, this 48-bit factorization “refreshes the largest integer factored by a general [quantum] method in a real quantum device to date.” In other words, it set a new quantum factoring record.
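Reproducing that verification takes a single line of integer arithmetic:

```python
# Verifying the reported 48-bit SQIF factorization
n = 261_980_999_226_229
p, q = 15_538_213, 16_860_433

assert p * q == n, "factors do not reproduce N"
print(f"{p} x {q} = {n} "
      f"({n.bit_length()}-bit N; factors are "
      f"{p.bit_length()} and {q.bit_length()} bits)")
```

Python’s arbitrary-precision integers make the check exact; no floating-point rounding is involved.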

It’s worth emphasizing that this record-setting status refers to a general method on a real quantum system. There have been larger numbers factored by special-purpose algorithms using some quantum operations (for instance, the 41-bit example above, which had heavy classical pre-processing, or some quantum annealing cases), but the 48-bit experiment is arguably the most substantial number where the quantum computer played a central role in the factoring procedure without trivially brute-forcing a small search space. The term “general method” implies that, unlike some highly tailored demonstrations (which only work for specific forms of numbers), the lattice/QAOA approach could, in principle, be applied to any number – given enough resources.

Claimed Implications – (Very) Optimistic Outlook: Perhaps the most attention-grabbing claim in the Chinese researchers’ paper was that their algorithm might scale to threaten real cryptography with far fewer qubits than Shor’s algorithm would require. In fact, they speculated – again, in theory – that 372 physical qubits could be enough to factor a 2048-bit RSA key using the SQIF approach. This figure caused a mix of excitement and skepticism in the community. If it were true, it would mean a feasible quantum computer (a few hundred high-quality qubits is not that far off, as companies are already building chips of that scale) might crack today’s RSA-2048, essentially bringing the timeline for breaking encryption much closer. However, this claim comes with many caveats. The paper was a preprint (meaning it had not undergone peer review at the time of its announcement), and experts quickly pointed out that even if only 372 qubits are needed, those qubits would likely need to run a very long and precise sequence of operations that current devices cannot manage. Moreover, the optimistic resource count assumes the algorithm’s scaling behaves kindly as one jumps from 48-bit to 2048-bit inputs – something far from guaranteed.

Reception and Skepticism: The broader cryptography and quantum computing community greeted the 48-bit factoring result with both intrigue and healthy skepticism. Personally, I found no real evidence of this approach outperforming classical optimization. It’s possible that a classical computer running a similar heuristic could have also factored that 48-bit number without needing quantum magic – we just don’t know, because the advantage of QAOA remains unproven in practice. The paper’s own concluding sentence tempered the claims, stating, “Besides, the quantum speedup is unknown, it is still a long way to break RSA quantumly.” In other words, even the inventors of SQIF acknowledge that RSA-2048 isn’t falling tomorrow and that they haven’t mathematically shown their method scales efficiently with n.

Importantly, this does not diminish the achievement of factoring a 48-bit number with a quantum device – that is a real experimental milestone, showing how far quantum hardware and algorithm ingenuity have come. But it means we should view the implications with caution. The result should motivate further research, but not panic in the streets about immediate cryptographic doom.

(Note: As of this writing, the 48-bit SQIF factorization is a reported result in a preprint, not yet verified by independent teams or published via peer review. This is common in fast-moving fields – new findings often appear on arXiv first. The usual disclaimer applies: while there is no obvious reason to doubt the experimental result, the broader claims about scalability need careful scrutiny. The peer-review process and follow-up works in the coming years will likely refine our understanding of this approach.)

Verifying Quantum Factoring Results and the Role of Classical Assistance

It’s instructive to discuss how such hybrid quantum-classical factoring actually plays out in practice, and how one verifies the outcome:

When a quantum computer factors a number (whether via Shor’s algorithm or a hybrid method), the final proof that it worked is typically a set of candidate factors that can be quickly checked by multiplication. Unlike some computational problems where checking a claimed answer is itself hard, factoring has the nice property that verification is easy: multiply the factors and see if you get the original number. In the 48-bit example, after running their QAOA routines, the researchers would have obtained results (probably from measuring some qubit registers) that suggested the two factors 15,538,213 and 16,860,433. They then multiplied these together on a classical computer and indeed got 261,980,999,226,229, confirming success.

However, getting to that point often involves significant classical computation around the quantum loop. In SQIF, for instance, each iteration of QAOA had to be guided by a classical optimizer that adjusted the quantum circuit’s parameters to improve the odds of finding the solution. The quantum processor might run a million short circuits and feed results to the classical side, which then says “tweak this angle and try again.” This hybrid workflow can take hours or days, even for a 48-bit number, depending on the efficiency of the search. Similarly, in the variational factoring of the 41-bit number, 24 iterations of classical preprocessing were applied to simplify the problem before the quantum stage. We can think of these as classical filter steps that reduce the search space (e.g., eliminating many possible factor combinations via logical deductions), so that the quantum part doesn’t have an astronomically large task.
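The shape of that feedback loop is easy to sketch. The toy below (a stand-in cost landscape plus a crude accept-if-better hill climber – not a quantum simulation, and not the optimizer the authors used) shows the measure-adjust-repeat structure of a variational hybrid:

```python
import math
import random

random.seed(0)  # deterministic demo

def estimate_cost(params):
    """Stand-in for running the parameterized quantum circuit many times
    and averaging the measured cost; here just a smooth toy landscape
    with its minimum at gamma in {0, pi} and beta = 0.5."""
    gamma, beta = params
    return math.sin(gamma) ** 2 + (beta - 0.5) ** 2

def classical_optimizer_loop(steps=200, step_size=0.1):
    """The hybrid feedback loop: 'measure', nudge the angles, repeat."""
    params = [random.uniform(0, math.pi) for _ in range(2)]
    best = estimate_cost(params)
    for _ in range(steps):
        trial = [p + random.gauss(0, step_size) for p in params]
        c = estimate_cost(trial)
        if c < best:              # keep the tweak only if the cost improved
            params, best = trial, c
    return params, best

params, cost = classical_optimizer_loop()
print(f"converged cost = {cost:.4f}")
```

In a real QAOA run, `estimate_cost` would dispatch thousands of circuit executions to the quantum processor per evaluation, which is why even a modest number of optimizer steps can take hours of wall-clock time.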

The interplay means that declaring “a quantum computer factored X” often comes with an asterisk that some of the heavy lifting was classical. This is not unexpected – quantum computers are still small and noisy, so we use them for what they’re potentially better at (like exploring many combinations in superposition or tunneling through an energy landscape), and use classical algorithms for the rest. As quantum hardware improves, the balance of what’s done quantumly vs classically can shift toward the former.

In the case of fully implemented Shor’s algorithm on a future fault-tolerant machine, one wouldn’t need all these variational parameter tweaks or lattice reductions; the quantum computer would just crunch through the modular exponentiation and Fourier transform in one go, and spit out the factors with a certain probability. But until such machines exist, these hybrid schemes serve as important stepping stones – they allow us to use early quantum machines today to solve toy versions of real problems and learn where the pain points are.

What Would It Take to Break RSA-2048? (Quantum Resources vs RSA Challenge)

The obvious question on every cybersecurity expert’s mind is: How does this 48-bit experiment compare to the task of factoring a 2048-bit RSA modulus? In other words, how much more advanced do our quantum computers need to be to actually threaten something like RSA-2048, which secures much of the internet’s sensitive communications?

The gap is enormous. RSA-2048 refers to an integer roughly $$N \approx 3 \times 10^{616}$$ (a 2048-bit number, about 617 decimal digits long) which is a product of two 1024-bit prime numbers. Factoring such an N is utterly infeasible for classical computers – that’s the whole point of RSA’s security. Shor’s algorithm in principle could factor it in polynomial time, but the polynomial’s constants and exponents are nontrivial at that size, and the algorithm would need to run on a fault-tolerant quantum computer with a huge number of qubits and operations.

How huge? Recent estimates, accounting for error-correction overhead and realistic device operation speeds, suggest something on the order of millions of physical qubits running for several hours would be required. For example, in 2021 Craig Gidney and Martin Ekerå published a detailed resource estimation for factoring 2048-bit RSA using Shor’s algorithm with state-of-the-art optimizations. They concluded that it would take roughly 20 million noisy physical qubits about 8 hours to get the job done. In terms of error-corrected logical qubits, their scheme used around $$3n \approx 3 \times 2048 \approx 6144$$ logical qubits (where n is the number of bits of the RSA modulus) plus some overhead. The number of quantum gate operations (particularly the non-Clifford Toffoli gates which dominate Shor’s algorithm cost) was on the order of $$0.3 n^3 \approx 0.3 \times (2048)^3 \sim 2.6 \times 10^{9}$$ Toffolis, i.e., a few billion quantum operations, which after adding error-correction overhead could balloon further. In short, you’d need a truly large-scale quantum computer: something like a 6144-qubit error-corrected quantum CPU (which itself might require tens of millions of physical qubits in a superconducting or ion-trap architecture) chugging through billions of gate operations. By contrast, today’s largest experimental quantum processors have a few hundred physical qubits, none of which are error-corrected. We are many breakthroughs away from the scale of machine required.
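The headline figures quoted from Gidney and Ekerå are easy to sanity-check with their scaling rules:

```python
# Back-of-envelope check of the quoted Shor resource estimates
n = 2048                           # RSA modulus size in bits

logical_qubits = 3 * n             # ~3n logical qubits (plus overhead)
toffoli_count = 0.3 * n ** 3       # ~0.3 * n^3 Toffoli gates

print(f"logical qubits : ~{logical_qubits}")
print(f"Toffoli gates  : ~{toffoli_count:.2e}")
```

Both numbers match the figures in the text: about 6144 logical qubits and roughly 2.6 billion Toffoli gates, before any error-correction overhead is layered on top.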

Even the 372 physical qubits figure suggested by the Chinese SQIF paper for RSA-2048 should be taken with a heap of salt. It sounds minuscule relative to 20 million, but there’s a catch: even if only 372 physical qubits were needed, the algorithm would likely need to run a very deep circuit or many repetitions to succeed, which brings back the necessity of error correction and/or far better qubit quality than we have now. The current 10-qubit demonstration handled a number in the hundreds of trillions; the modulus would have to grow from 48 bits to 2048 bits, and there is no evidence that the method’s cost grows gently over that range. The authors’ estimate of 372 qubits is probably based on optimistic assumptions that each qubit can be reused many times in sequence (or that the problem structure reduces the qubit count dramatically). In practice, noise would accumulate unless those qubits are error-corrected or the coherence times are enormously improved.

To put things in perspective: 48-bit factoring vs 2048-bit factoring is like scaling Mount Everest vs going to the Moon. A 48-bit number is so small that classical computers solve it in the blink of an eye (even simple trial division would polish it off quickly). We celebrate it in the quantum world only because our quantum devices are still so limited. A 2048-bit number, on the other hand, is astronomically larger – no classical computer on Earth could factor it in the age of the universe using brute force, and even the best classical algorithms (like the General Number Field Sieve) would take billions of years on current hardware. Quantum algorithms reduce that daunting task to something feasible in principle, but the in-principle machine is beyond what we can build today.

So, how close are we to factoring RSA-2048 on a fault-tolerant quantum computer? The honest answer: we are still very far. The 48-bit result is a tiny step along the way, one of many that will be needed. If we measure progress by number size on a logarithmic scale, going from 4-5 bit factors (circa 2001) to 48-bit (2022) is progress, but the jump to 2048-bit is several orders of magnitude more. It’s likely to take not just incremental improvements but new paradigms in quantum error correction, hardware engineering, and perhaps algorithmic innovation to bridge that gap. Many researchers estimate we are at least a decade (or more) away from having a quantum computer capable of factoring RSA-2048, and that’s if progress is steady. Some are more pessimistic, given the engineering challenges, thinking it could be several decades. On the other hand, optimists point to the accelerating pace of quantum technology and say it’s not impossible that unforeseen breakthroughs (or even a clever algorithmic trick) could surprise us earlier. For safety, the cryptography community is actively working on transitioning to post-quantum cryptography well before a quantum attacker is available.

The Significance of the 48-Bit Demo and Its Context in Cryptographic Risk

From a pure security standpoint, factoring a 48-bit number with a quantum computer does not pose any threat to real-world cryptosystems. A 48-bit modulus has been trivially breakable on ordinary computers for decades; classically, the public factoring record stands at 829 bits (RSA-250, factored in 2020 by a distributed academic effort), after 768-bit RSA fell in 2009. Modern security standards moved to 2048-bit keys precisely because those are out of reach. So no one is using 48-bit keys for security – that would be laughably insecure. Thus, the 48-bit quantum factoring result has no direct impact on the security of communications. Where its impact lies is in the demonstration of technique and a checkpoint in the race between quantum capability and cryptographic defenses.

For the quantum computing field, achieving this with 10 qubits is a valuable proof that hybrid algorithms can scale a bit better than plain Shor in the NISQ era. It suggests that we should continue exploring alternative quantum algorithms that work in concert with classical methods – perhaps they can reach somewhat larger numbers next, and so on. Each additional bit-length we can factor quantumly is an indicator of progress in hardware and algorithms.

For the cybersecurity community, the result is a reminder that Y2Q (the “Year of Quantum”, when quantum computers can break encryption) might eventually arrive, and we need to be prepared well before then. It injects some urgency into efforts to deploy post-quantum cryptography (PQC). Even though 48 bits is no danger, the fact that academic teams are actively expanding that capability means the once-theoretical threat is becoming increasingly concrete. We don’t want to still be using RSA-2048 when quantum computers finally get capable enough to challenge 2048-bit keys. The smart approach is to migrate to quantum-resistant algorithms (like lattice-based, code-based, or multivariate cryptosystems that are believed to resist quantum attacks) before it’s too late. Remember that agencies can store encrypted data now and decrypt later when they have the means – so a secret stolen today could be decoded in 10-15 years if we haven’t switched away from RSA by then and if a quantum computer comes online. National security agencies are acutely aware of this, which is why NIST has been running a standardization process for PQC algorithms.

Broader Implications and Questions at the Intersection of Fields

One particularly intriguing aspect of the Chinese SQIF algorithm is how it bridges two domains: the problem of factoring integers and lattice-based cryptography. By using lattice reduction (the same kind of math underlying some leading PQC schemes) as a tool to factor numbers, the researchers have connected fields that are usually considered separate. This raises some thought-provoking questions and implications:

Could Lattice-Based Cryptosystems Be at Risk? The fact that factoring was translated into a lattice problem (CVP) and then solved with the help of a quantum algorithm might make one wonder: if a quantum approach can solve the structured lattice problem arising from factoring, could similar quantum algorithms solve random lattice problems that underpin post-quantum encryption? Today’s lattice-based encryption schemes rely on the assumption that problems like Learning With Errors (LWE) or general CVP are intractable even for quantum computers. The SQIF work does not directly undermine that assumption – factoring via lattice is a very special case, exploiting the algebraic structure of factoring and known relations. However, it serves as a reminder that as quantum algorithms develop, researchers will be probing the hardness of lattice problems too. It’s a two-way street: PQC might be safe from known quantum algorithms, but new algorithms could emerge from left field. The interplay seen here suggests more cross-pollination between cryptanalysis and quantum algorithm design. It will be important to scrutinize lattice-based PQC with the same vigor given to RSA and elliptic-curve systems. (In short: SQIF doesn’t break lattice cryptography, but it does use lattice cryptography’s tools to break RSA – a clever twist. This kind of interdisciplinary technique might inspire future attacks or defenses.)

The Power of Hybrid Algorithms: The success of variational and hybrid quantum algorithms in pushing the factoring record upward hints at their potential in other areas. If combining classical compute with a quantum module can solve factoring up to 48 bits, what about other tough problems? We should consider what other applications could benefit from this hybrid approach. Optimization problems in logistics, machine learning model training, protein folding predictions – these are all being explored with hybrid quantum algorithms. The variational principle and QAOA used for factoring is very general; it could be adapted to many combinatorial problems. One takeaway is that even before we have full-blown error-corrected quantum computers, these NISQ-era algorithms might yield practical speedups in domains like supply chain optimization or drug discovery. The jury is still out – as noted, QAOA hasn’t shown a decisive speedup over classical algorithms yet – but every experimental success (like factoring) provides more insight into how to tune and deploy these algorithms.

Scaling Behavior and Error Correction: Another question is how these approaches scale and whether error-corrected qubits will be needed sooner rather than later. The 48-bit experiment worked on hardware without error correction by keeping circuits short. But as we attempt, say, 64-bit or 128-bit numbers, the required quantum circuit depth (number of sequential operations) will grow. At some point, errors will overwhelm the computation unless we incorporate error correction or improve hardware fidelity dramatically. So, a key research question is: where is the crossover point at which we’ll need to implement quantum error correction for factoring? If, hypothetically, one could factor a 100-bit number on a noisy device with a clever hybrid method, that might be done on NISQ hardware. But 200-bit or 500-bit might be beyond the noise threshold. Understanding this will help chart a roadmap: how many physical qubits with what error rates are needed to meaningfully threaten, say, a 256-bit modulus? (Keys that small aren’t used for RSA in practice, but 256 bits could be an interesting interim challenge.) It’s likely that well before 1024-bit, we will need error correction, which loops the discussion back to requiring many more qubits.

Finally, it’s worth reflecting on just how multi-disciplinary breakthroughs like this 48-bit factorization are. They sit at the intersection of quantum physics, computer science, and mathematics. Progress required improving superconducting qubit technology, inventing new quantum algorithms, leveraging classical optimization theory, and understanding number theory and lattices. For readers and researchers, this is a reminder that sometimes the next leap will come from connecting dots between fields: a cryptographic insight might inspire a quantum algorithm, or a physics technique might change what computational problems are feasible. The story of quantum factoring so far has been exactly that – an interplay between math and physics (Shor’s pure math algorithm needed the physics of entanglement to shine), and now between classical and quantum algorithms (variational methods using classical guidance to steer quantum searches).

Conclusion

In summary, the largest number factored using a quantum computer to date is a 48-bit, 15-digit integer, achieved via a hybrid quantum-classical algorithm on a 10-qubit superconducting device. This feat showcases the rapid progress in quantum computing: only a few years ago, small numbers like 21 and 35 were the state of the art for quantum factoring, and now the record stands at 48 bits. The algorithm (a lattice-based QAOA approach rather than pure Shor's) and the hardware (NISQ-era qubits) highlight how clever techniques can stretch the power of current quantum machines. The result was verified by the mundane act of multiplying the found factors to confirm the answer, and classical computation was an indispensable partner in guiding the quantum search.
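That verification step really is as simple as it sounds. The sketch below uses a hypothetical 48-bit semiprime built from two primes chosen for illustration (these are not the factors from the actual experiment) to show what checking such a result looks like:

```python
# Hypothetical example: verify a claimed factorization of a 48-bit,
# 15-digit semiprime. p and q are illustrative primes, not the values
# from the published experiment.
p, q = 15485863, 15485867   # two nearby primes (~1.5e7 each)
n = p * q                   # the "hard" number a quantum device would target

assert n.bit_length() == 48     # 48-bit modulus
assert len(str(n)) == 15        # 15 decimal digits
assert n % p == 0 and n % q == 0  # the claimed factors actually divide n
print(f"n = {n} = {p} x {q}  ({n.bit_length()} bits)")
```

Classically verifying the answer takes microseconds; it is finding the factors in the first place that is hard, which is the asymmetry RSA relies on.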

When we compare this accomplishment to what's needed to crack RSA-2048, however, we see just how much farther we have to go. Factoring a 2048-bit number is in a different universe of complexity, demanding thousands of high-quality logical qubits and billions of operations – a capability likely still years of scientific and engineering breakthroughs away. The current milestone, while remarkable for quantum computing, does not change the security status of RSA or other cryptographic systems. It does, however, add momentum to the quantum computing race and offers valuable lessons for the next milestones.
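To give a rough sense of that gap, the sketch below applies commonly cited logical-resource scaling for Shor's algorithm (on the order of 2n + 2 logical qubits and roughly 0.3 n³ Toffoli gates for an n-bit modulus). These are illustrative scaling formulas, not a statement of any specific paper's exact costs, and they ignore the large physical-qubit overhead of error correction:

```python
# Rough logical-resource scaling for Shor's algorithm on an n-bit modulus.
# Formulas (~2n+2 logical qubits, ~0.3*n^3 Toffoli gates) are illustrative
# approximations of commonly cited estimates; real costs depend heavily on
# the circuit construction and the error-correction overhead on top.
def shor_logical_estimate(n_bits: int) -> tuple[int, float]:
    logical_qubits = 2 * n_bits + 2
    toffoli_gates = 0.3 * n_bits ** 3
    return logical_qubits, toffoli_gates

for n in (48, 256, 1024, 2048):
    qubits, gates = shor_logical_estimate(n)
    print(f"n = {n:4d} bits:  ~{qubits:5d} logical qubits,  ~{gates:.1e} Toffoli gates")
```

For n = 2048 this yields thousands of logical qubits and gate counts in the billions, consistent with the "different universe of complexity" described above, and each logical qubit would in turn require many physical qubits under current error-correction schemes.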

For those in the cybersecurity community, achievements like the 48-bit factorization are a signal to keep watching the horizon. No need for alarm – RSA-2048 is still safe for now – but the wise move is to prepare for a post-quantum world before the storm arrives. The fact that researchers are already trying (and somewhat succeeding) to factor numbers in non-traditional ways (using lattice problems and hybrid algorithms) should encourage a transition to quantum-resistant cryptography sooner rather than later.

Quantum computing enthusiasts, on the other hand, can celebrate this as a win that combined ingenuity in algorithms with advances in hardware. It's a step toward practical usefulness: the techniques developed here might find applications that matter well before anyone gets close to cracking RSA. And for the broader scientific audience, it's a fascinating case study of how ideas from disparate areas – from Schnorr's lattice reduction to QAOA from quantum computing – came together to make something new happen. We can expect more such cross-disciplinary innovation as the field pushes forward.

In conclusion, the record will likely continue to be broken: 48-bit today, perhaps 60-bit or 100-bit in a few years with improved hybrid methods, and eventually, with fault-tolerant quantum computers, we’ll approach the capability to factor truly large numbers. Each milestone both validates the promise of quantum computing and underscores the challenges that remain. Breaking RSA-2048 with a quantum computer is not around the corner, but it is on the horizon. The prudent course is to assume that day will come and to prepare our cryptographic infrastructure accordingly, while we cheer on the scientists and engineers who are turning the quantum factoring dream into reality one qubit at a time.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.