CRQC Readiness Benchmark – Benchmarking Quantum Computers on the Path to Breaking RSA-2048
Quantum computing is racing toward cryptographically relevant quantum computers (CRQC) – meaning quantum machines powerful enough to break present-day encryption (the feared “Q-Day” when RSA-2048 finally falls). Measuring progress toward this goal has been a topic of intense debate for decades. It is not as simple as counting qubits. A quantum computer’s crypto-breaking ability hinges on multiple factors – not just the quantity of qubits, but also their quality (error rates and coherence times), the overhead of error correction, and even how efficient our algorithms are at using those qubits.
Crucially, this cryptanalytic power doesn’t necessarily track with other quantum computing milestones. For example, some quantum processors have already achieved quantum advantage in specialized tasks. Google’s Sycamore chip famously solved a contrived problem faster than any classical supercomputer could, a milestone touted as “quantum supremacy.” Yet those feats don’t equate to progress in breaking encryption. A noisy intermediate-scale quantum (NISQ) device might excel at a niche calculation, but it remains far from cracking RSA or other cryptography. In other words, conventional benchmarks and metrics (qubit counts, gate fidelity, Quantum Volume, etc.) don’t directly indicate how close we are to an encryption-cracking machine. I believe we need a more crypto-centric way to gauge progress toward Q-Day.
With this post I would like to invite the community to help come up with a CRQC-focused benchmark. I’ll summarize how quantum computers are benchmarked today and why those metrics might not be the best measures for cryptographic targets. I’ll then propose a new approach to track progress toward a CRQC, provisionally called a “CRQC Readiness Benchmark”, focused specifically on a quantum machine’s ability to break cryptography. Using this CRQC-oriented yardstick, we’ll discuss what current breakthroughs suggest about when Q-Day might arrive. I welcome input and perspectives from others on refining this approach.
Limitations of Naive Metrics: Beyond Qubit Count
It’s tempting to measure a quantum computer by the headline number of qubits. After all, Shor’s algorithm for factoring an N-bit RSA key naively needs on the order of 2N+3 qubits. But raw qubit count is a poor proxy for cryptographic capability. Today’s qubits are highly error-prone and short-lived; quality trumps quantity. A small number of logical, error-corrected qubits can outperform many noisy physical qubits. For example, IonQ’s trapped-ion systems had only a few dozen qubits when their high fidelity allowed them to run much deeper circuits than some superconducting chips with 100+ qubits. In short, 5 high-quality qubits can be more valuable than 50 noisy ones.
Early on, IBM recognized this by defining three performance axes for quantum hardware:
- Scale: number of qubits (the raw count, akin to hardware size).
- Quality: how reliably those qubits perform, measured by composite metrics like Quantum Volume (QV).
- Speed: how fast the device can execute circuits, measured by CLOPS (circuit layer operations per second).
Together, these give a more holistic yardstick than qubit count alone. Let’s briefly review a few of the standard benchmarks and why they only partially reflect progress toward breaking RSA:
Quantum Volume (QV)
A single-number benchmark introduced by IBM, QV measures the largest random circuit of equal width and depth that a quantum computer can successfully execute. A device with QV = $$2^n$$ can handle an n-qubit random circuit of depth n with reasonable fidelity. QV grows as both qubit count and gate fidelity improve. It’s a useful composite metric – IBM’s QV has doubled many times over successive hardware generations, reaching QV 512 on its best systems.
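To make the pass/fail rule concrete, here is a minimal sketch of how a QV figure could be derived from measured heavy-output probabilities, under the simplifying assumption that a size-n test passes when that probability exceeds 2/3 (the full protocol also prescribes circuit counts and confidence intervals). The data in the example is made up.

```python
# Minimal sketch of the Quantum Volume decision rule (simplified; no confidence
# intervals). Input: measured heavy-output probability per equal width-and-depth size.

def quantum_volume(heavy_output_prob_by_size: dict[int, float]) -> int:
    """Return 2**n for the largest size n that passes the 2/3 heavy-output rule."""
    largest_passing = 0
    for n in sorted(heavy_output_prob_by_size):
        if heavy_output_prob_by_size[n] > 2 / 3:
            largest_passing = n
        else:
            break  # simplification: stop at the first failing size
    return 2 ** largest_passing

# Illustrative (made-up) heavy-output probabilities for sizes 2..5:
print(quantum_volume({2: 0.78, 3: 0.74, 4: 0.70, 5: 0.64}))  # -> 16, i.e. QV 16
```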
But QV uses random circuits, which may not capture the structure of cryptographic algorithms. A machine might have high QV yet still be far from running the extremely long, structured circuits required for RSA factoring.
Algorithmic Qubits (#AQ)
A benchmark popularized by IonQ, #AQ reports the largest problem size (qubit count in an algorithm) for which the computer can run a suite of reference algorithms with acceptable results. In essence, it asks “how many usable qubits do we have for real workloads?” If a device has #AQ = 25, it means you can reliably run algorithms on 25 qubits (even if the hardware has more physical qubits). When IonQ demonstrated #AQ = 29 on their 32-qubit Forte system, it meant it could handle circuits on 29 qubits with depth up to ~800 two-qubit gates before results degraded. This metric explicitly combines qubit count and gate fidelity. It’s more application-oriented than QV – in fact IonQ argues #AQ “measures what matters most” since qubit count alone “is far from a true measure of computational power”.
Still, #AQ benchmarks typically focus on near-term algorithms (quantum chemistry, optimization, etc.), not cryptanalysis. A machine might boast #AQ = 30 for chemistry problems, yet factoring a 2048-bit number is a very different challenge.
Random Circuit Sampling (RCS)
This is the benchmark behind quantum supremacy experiments. RCS pushes a device to run large random circuits and checks if its output distribution has the quantum correlations expected (via cross-entropy fidelity). Google’s 53-qubit Sycamore famously did this in 2019, performing a 53-qubit, depth-20 random circuit that would take classical supercomputers thousands of years to simulate. RCS essentially stress-tests the raw horsepower of a quantum processor – if you can sustain an entangled 50+ qubit state even for a brief moment, you’re beyond classical reach.
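For readers who want the fidelity formula spelled out, here is a small sketch of the linear cross-entropy (XEB) estimator commonly used in these experiments; the sampled probabilities in the example are illustrative placeholders, not data from any real device.

```python
# Linear XEB fidelity sketch: F_XEB = 2**n * mean(ideal probability of each sampled
# bitstring) - 1. The ideal probabilities would come from classically simulating
# the random circuit, which is exactly what becomes infeasible at large sizes.
import numpy as np

def linear_xeb_fidelity(n_qubits: int, ideal_probs_of_samples: np.ndarray) -> float:
    return (2 ** n_qubits) * float(np.mean(ideal_probs_of_samples)) - 1.0

# Illustrative: a fully depolarized (uniform) sampler averages 1/2^n per bitstring,
# giving F_XEB ~ 0; a perfect sampler of a random circuit gives ~1.
n = 10
uniform_samples = np.full(1000, 1 / 2 ** n)
print(round(linear_xeb_fidelity(n, uniform_samples), 3))  # ~0.0
```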
However, as a benchmark for CRQC, RCS is indirect. It uses unstructured random operations, whereas factoring uses highly structured arithmetic circuits. Passing an RCS test (quantum supremacy) doesn’t guarantee the machine can run Shor’s algorithm for meaningful sizes – different circuits stress different aspects (connectivity, memory, etc.). RCS is more of a scientific milestone (proving quantum computers can outdo classical at something) than a yardstick for breaking encryption.
Error Rates and Randomized Benchmarking
At a lower level, quantum labs characterize performance by gate error rates (e.g. via randomized benchmarking sequences). For instance, single- and two-qubit gate fidelities are measured to see whether they beat the rough error-correction threshold of ~$$10^{-3}$$–$$10^{-4}$$. Recent records are impressive: in 2025, an Oxford ion-trap experiment achieved single-qubit gate errors of ~$$10^{-7}$$ (about 0.00001%), a virtually unprecedented gate fidelity. Such fidelities hint that each logical qubit might soon require only a few hundred physical qubits instead of thousands, since error rates are so low. Error metrics are critical – after all, a CRQC depends on suppressing errors faster than they accumulate.
However, focusing on individual error rates in isolation can be misleading for system-level progress. What matters is the combined ability to execute a long logical circuit. A device might have 99.9% gate fidelity but if it has only 10 qubits or very short coherence time, it’s still nowhere near factoring large numbers.
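As a concrete illustration of how such error rates are typically extracted, here is a minimal randomized-benchmarking fit against the standard exponential decay model; the sequence lengths and survival probabilities below are made-up numbers chosen only to show the procedure, not measurements from any real device.

```python
# Randomized-benchmarking sketch: fit F(m) = A * p**m + B to survival probability
# versus sequence length m, then convert the depolarizing parameter p to an
# average error per gate. All data points here are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, p, b):
    return a * p**m + b

lengths = np.array([1, 10, 50, 100, 200, 400, 800])
survival = np.array([0.995, 0.99, 0.96, 0.93, 0.87, 0.77, 0.61])  # illustrative

(a, p, b), _ = curve_fit(rb_decay, lengths, survival,
                         p0=[0.5, 0.999, 0.5],
                         bounds=([0.0, 0.9, 0.0], [1.0, 1.0, 1.0]))
dim = 2  # single-qubit Hilbert-space dimension
error_per_gate = (1 - p) * (dim - 1) / dim
print(f"fitted p = {p:.5f}, average error per gate ~ {error_per_gate:.1e}")
```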
Throughput Measures (CLOPS, QOPS)
IBM introduced CLOPS (Circuit Layer Operations Per Second) to measure how many layers of gates a machine can apply per second, including control software overhead. This is basically a speed benchmark. More germane to CRQC is a concept from Microsoft: rQOPS, or “reliable Quantum Operations Per Second.” This forward-looking metric asks: how many error-corrected logical operations can the computer perform per second? Today’s NISQ devices have effectively zero rQOPS for deep circuits (any long sequence fails). But in a future fault-tolerant machine, we might speak of, say, $$10^6$$ reliable ops/second. Why is this important? Because breaking RSA-2048 will likely require on the order of $$10^{12}$$ quantum operations in total. If your CRQC can execute, say, 1 million error-free ops per second, then in principle it could finish ~$$10^{12}$$ ops in about $$10^6$$ seconds (~11.6 days). If it can do 10 million ops/sec, that’s under 2 days. This throughput to solution is ultimately what counts for cryptanalysis. I will revisit rQOPS as a key component of my proposed benchmark – essentially it’s the quantum analog of FLOPS for supercomputers, and a single number that combines scale, speed, and error resilience.
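The time-to-solution arithmetic above is simple enough to capture in a few lines. Here is a small sketch; the $$10^{12}$$-operation total is the rough order-of-magnitude figure used in this post, not a precise resource estimate.

```python
# Back-of-the-envelope time-to-solution: total logical operations divided by
# sustained reliable operations per second (rQOPS).

SECONDS_PER_DAY = 86_400

def days_to_solution(total_ops: float, reliable_ops_per_sec: float) -> float:
    return total_ops / reliable_ops_per_sec / SECONDS_PER_DAY

for rqops in (1e5, 1e6, 1e7):
    print(f"{rqops:.0e} reliable ops/sec -> {days_to_solution(1e12, rqops):8.1f} days")
# 1e6 ops/sec gives ~11.6 days; 1e7 ops/sec gives ~1.2 days.
```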
The Takeaway
Existing benchmarks (QV, #AQ, RCS, error rates, etc.) each illuminate one facet of performance. But none alone fully captures “how close are we to factoring RSA-2048?” Qubit count ignores quality; QV and #AQ capture quality but are tested on easier tasks; RCS tests raw power but on random circuits; gate errors tell nothing about system integration; CLOPS/rQOPS speak to speed but not whether the algorithm fits. To gauge progress toward a CRQC, we need to synthesize these dimensions.
What Does a Cryptographically Relevant Quantum Computer Require?
To benchmark progress toward breaking RSA-2048, we should clarify the target. A CRQC in this context means a quantum computer that can factor a 2048-bit RSA modulus (or break an equivalent crypto system) within a practical timeframe (say days or weeks). Shor’s factoring algorithm is the standard quantum method for doing this. What resources does that demand? Broadly:
Sufficient Logical Qubits
The algorithm needs qubits to represent large integers and perform arithmetic. Early estimates suggested you needed at least 2n qubits to factor an n-bit number (one register for the number, one for workspace). Indeed, until recently it was believed ~4096 logical qubits would be the floor for RSA-2048. But algorithmic breakthroughs have slashed this. In 2024, researchers (Chevignard, Fouque, Schrottenloher) found a way to do modular exponentiation with far fewer qubits by allowing approximations, breaking the “one qubit per bit” bottleneck. Building on that, in May 2025 Google’s Craig Gidney showed RSA-2048 could be cracked with roughly 1,400 logical qubits running a variant of Shor’s algorithm. This is a stunning reduction from earlier estimates that needed thousands of logical qubits. In fact, Gidney’s new number is 20× lower than his own 2019 estimate (which was ~20 million physical qubits, translating to many thousands of logical qubits).
Bottom line: a CRQC likely needs on the order of $$10^3$$ logical qubits, not $$10^4$$ or $$10^5$$, thanks to smarter algorithms.
Low Logical Error Rates (Deep Circuit Depth)
Factoring is computationally intensive. The quantum circuit (especially the modular exponentiation part of Shor’s algorithm) can consist of billions of quantum gate operations that must be executed in sequence. Any error occurring in the middle could corrupt the result. Thus, each logical qubit must have an error probability well below $$10^{-12}$$ or so (the inverse of the number of operations). Achieving that requires not just physical qubit fidelity, but error correction codes that can suppress errors over long stretches. This is why the number of physical qubits per logical qubit (the error correction overhead) is so important. Historically, with physical gate errors around ~1e-3, you might need thousands of physical qubits to make one sufficiently stable logical qubit. But if physical error rates drop to 1e-4 or 1e-5, the overhead reduces dramatically. Trapped-ion systems, for example, routinely achieve two-qubit gate errors ~1e-3 or better and single-qubit errors ~1e-5; superconducting qubits are now around 1e-3 and improving, and experimental prototypes have hit 1e-7 for single-qubit gates. Using advanced codes (like IBM’s recently published quantum LDPC codes that cut overhead ~10×), experts anticipate hundreds of physical qubits per logical qubit instead of tens of thousands. In practice, this means a million-physical-qubit machine could support thousands of logical qubits – enough for RSA-2048. Progress on this front is evident: in late 2024, Google’s 105-qubit “Willow” chip demonstrated quantum error correction below the error threshold for the first time, showing that increasing code size (distance-5 to distance-7) actually reduced logical error rates exponentially. This was a key proof-of-concept that scaling up error correction will work.
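To see how the physical-per-logical overhead falls as physical error rates improve, here is a rough sketch using a commonly quoted surface-code scaling heuristic. The prefactor, the ~1% threshold value, and the ~2d² qubits-per-logical-qubit count are textbook approximations I am assuming for illustration; real overheads depend on the code, the decoder, and the noise model.

```python
# Surface-code overhead sketch, assuming the heuristic
#   logical error per round ~ A * (p_phys / p_th) ** ((d + 1) / 2)
# with A ~ 0.1, threshold p_th ~ 1e-2, and ~2*d^2 physical qubits per logical qubit.

def physical_per_logical(p_phys: float, p_target: float,
                         a: float = 0.1, p_th: float = 1e-2) -> tuple[int, int]:
    """Return (code distance d, approx. physical qubits per logical qubit)."""
    assert p_phys < p_th, "physical error rate must be below threshold"
    d = 3
    while a * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d, 2 * d * d

if __name__ == "__main__":
    for p_phys in (1e-3, 1e-4, 1e-5):
        d, overhead = physical_per_logical(p_phys, p_target=1e-12)
        print(f"p_phys={p_phys:.0e}: distance {d}, ~{overhead} physical/logical, "
              f"~{overhead * 1400 / 1e6:.2f}M physical qubits for 1,400 logical")
```

Under these assumptions, dropping the physical error rate from $$10^{-3}$$ to $$10^{-5}$$ shrinks the overhead from roughly 900 to roughly 100 physical qubits per logical qubit, which is the qualitative point made above.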
Sustained Operation / Coherence Time
Factoring 2048-bit RSA might take on the order of $$10^8$$–$$10^9$$ quantum clock cycles if done in one shot (e.g. Gidney’s method estimates about a week of runtime). The quantum hardware must either run continuously for that long or be able to pause/quiesce qubits without losing their state. This is non-trivial – maintaining quantum coherence over days is impossible without full error correction. Hence, fault tolerance is absolutely required; we’re not going to break RSA-2048 on a noisy interim device by simply being fast. All the computation must be actively error-corrected on the fly. This ties back to the logical error rate point – as long as errors are kept below threshold and fresh syndromes are corrected, the computation can in principle run indefinitely. (There’s also a question of classical control: e.g. can the classical decoding keep up in real time with millions of syndrome measurements per second? This is being addressed by fast FPGA/ASIC decoders and was part of IBM’s recent advances.)
Significant Parallelization (for speed)
Although Shor’s algorithm is largely sequential (the quantum Fourier transform and period-finding have steps that depend on previous ones), a large quantum computer can parallelize subroutines and magic-state generation. For example, magic state factories to supply T-gates can run in parallel with the main computation. The more parallel logical operations a machine can do, the higher its effective rQOPS. A CRQC will harness hundreds or thousands of logical qubits not just for storage, but to prepare ancillas and error-correct continuously in parallel. This is why IBM’s and others’ roadmaps emphasize modular, multi-chip architectures – to operate many qubits and many gate operations concurrently.
To summarize, breaking RSA-2048 = (Order of $$10^3$$ logical qubits) + ($$10^{11}$$–$$10^{12}$$ logical gate operations) + (error correction sustaining $$10^{-12}$$ error rates over $$10^6$$–$$10^7$$ seconds of runtime). Any benchmark of progress must somehow capture all three: scale, stability, and speed.
I also developed a simple tool – the CRQC Readiness Benchmark (Q‑Day Estimator) – that lets you play with four inputs: LQC (logical qubits), LOB (logical operations budget), QOT (logical ops/sec), and an annual growth factor; it then combines them into a Composite CRQC Readiness Score and a projected “Q‑Day” (when week‑scale factoring of RSA‑2048 becomes practical). My approach and the tool are intentionally focused on cryptographic breakability, not generic “quantum advantage,” so they may diverge from headline qubit counts.
Existing Benchmarks vs. CRQC Requirements
Let’s sanity-check where current technology is, relative to the above requirements, using existing metrics and recent data:
Logical Qubit Count
As of mid-2025, we have at best a handful of logical qubits in labs. For instance, Google’s 105-qubit demo effectively realized a couple of logical qubits (distance-5 and -7 surface codes) protected for milliseconds. IBM and Quantinuum have also demonstrated encoding and simple logic on small logical qubits. We are not yet at the point of performing calculations on tens of logical qubits, but the roadmaps are aggressive. IBM’s June 2025 roadmap update explicitly targets ≈200 logical qubits by 2029, with a path to >1,000 logical qubits in the early 2030s. In fact, IBM announced it will deliver a fault-tolerant 200-logical-qubit machine (codenamed “Quantum Starling”) by 2029. Google similarly aims for a useful error-corrected computer by ~2029. Trapped-ion companies (Quantinuum, IonQ) might achieve smaller numbers of logical qubits on similar timescales but could network them together (e.g. Quantinuum plans to link ion trap modules).
Bottom line: We expect to cross into the few-hundred logical qubit regime by the end of this decade, if plans hold. That’s already in the ballpark of the ~1000 logical qubits needed for RSA-2048, albeit on the lower end.
Physical Qubit Scaling
Different modalities have different raw qubit counts. Superconducting qubit platforms (IBM, Google) lead in raw qubit count (IBM has a 433-qubit chip today and plans 1000+ qubit chips in 2025–2026). Ion-trap systems have fewer (IonQ’s latest has 35+ ions, Quantinuum ~32 in H2), but each qubit is higher quality. Photonic approaches (PsiQuantum) claim they will leap straight to a million physical qubits (in a massively parallel photonic cluster state) to get enough logical qubits.
These physical counts alone don’t equal capability, but the trend is clear: hundreds of qubits now, thousands in a couple of years, maybe a million by ~2030. In fact, multiple companies are openly targeting the million-qubit scale as necessary for full error-corrected systems.
Gate Fidelity and Error Correction Progress
We’ve seen steady improvement in error rates. Many platforms now have average two-qubit gate errors around 0.1–1% ($$10^{-3}$$ to $$10^{-2}$$), and one-qubit errors an order of magnitude better. As mentioned, record experiments achieved $$10^{-4}$$–$$10^{-7}$$ range for single-qubit gates. To operate a CRQC, those physical error rates need to be comfortably below threshold (~$$10^{-3}$$ for surface code) and remain stable over hours of operation. By demonstrating distance-7 logical qubits that beat physical qubits’ error rates, Google showed we’re entering the regime where adding qubits actually helps. Similarly, Quantinuum reported in 2022 a repetition code that slightly extended qubit coherence by using 3 physical qubits as one logical – a baby step to break-even. In short, error correction is now working in practice in small codes. The next milestones will be logical qubits with error rates $$10^{-6}$$ or better and executing logical gate sequences. (Notably, IBM in 2023 showed a 127-qubit device could run entangled circuits of over 4,000 two-qubit gates with good fidelity – indicating respectable coherence and control even without full error correction.)
This progress in fidelity directly affects how many physical qubits are needed per logical qubit: recent results and new codes suggest perhaps ~100–1,000 physical qubits may suffice for one robust logical qubit, down from earlier estimates of ~10,000. That’s another way to benchmark advancement: physical-to-logical qubit ratio. Right now it’s essentially infinity (since we haven’t built a stable logical qubit beyond seconds), but by 2026-2027 we might start quoting “logical qubit achieved with 1000 physical” and then that ratio will drop further. When that ratio hits ~100:1, the total qubit counts needed for RSA factoring become much more feasible (100 physical per logical * 1000 logical = 100k physical qubits, which is within reach of projected hardware scale).
Algorithmic Efficiency
This is an often overlooked “benchmark.” The quantum algorithm to break RSA doesn’t stand still – researchers continuously optimize it. A major leap was Gidney & Ekerå’s 2019 work that cut the T-gate count and space needed (yielding the 20 million qubit, 8-hour estimate). And as noted, the 2025 follow-up cut qubit needs 20× at the cost of longer runtime. In 2023–2024, other proposals appeared to reduce the gates for modular arithmetic or use alternative factorization techniques; not all panned out (e.g. some controversial claims were ultimately flawed, like an experimental claim of factoring 48-bit RSA with only 10 qubits – which essentially offloaded work to classical post-processing). We should expect further algorithmic advances that either lower qubit requirements or shorten runtime. For example, one could imagine better phase estimation techniques, or more efficient error correction specifically tailored to Shor’s circuit.
Benchmarking algorithmic progress is tricky but crucial – it effectively shifts the goal posts closer. One way to quantify it is: minimum known logical qubits needed to factor 2048-bit RSA. That number was ~4096, then 2048, then ~1120 (from Ekerå’s 2017 result), and now ~1400. Similarly, the number of Toffoli (or T) gates needed is a metric – Gidney’s 2025 paper reports reducing the Toffoli gate count by over 100× relative to a 2024 approach. In essence, the “difficulty” of the RSA problem for quantum keeps dropping as better methods emerge. Any comprehensive CRQC benchmark should thus incorporate the latest algorithmic resource estimate as a baseline target.
So, how do we combine all this information into a single gauge of “how close are we to a crypto-breaking quantum computer”? We need to formulate a new benchmark or index that blends scale (logical qubits), quality (error rates → logical error per operation), and speed (operations per second), evaluated against the requirements of a reference cryptographic task (RSA-2048).
A Proposed “CRQC Readiness” Benchmark
To measure progress toward cryptographic breakthrough, I propose a benchmark centered on a specific challenge: factoring a large RSA key (2048-bit) within a reasonable time. The benchmark could be phrased as a question: If we dedicated a given quantum computer to factoring a 2048-bit RSA number, what is the estimated time to success? This “time-to-solution” is a directly meaningful metric for cryptographers and policymakers tracking the quantum threat. A shorter time implies we’re closer to Q-Day. We can derive this from more fundamental metrics:
1. Logical Qubit Capacity (LQC)
How many logical qubits can the system effectively support in a computation? This accounts for both the number of physical qubits and the overhead of error correction. For example, if a machine has 1,000 physical qubits and can operate a distance-X code, it might yield, say, 10 logical qubits. Another with 1,000 physical qubits but better coherence might yield 50 logical qubits. LQC is not just a count; it reflects if those logical qubits can all be used simultaneously in a circuit.
We could measure LQC by executing a large entangled algorithm (like a 10-logical-qubit error-corrected circuit) and seeing if it succeeds. In future, vendors will hopefully report “we have X logical qubits” as a milestone (IBM is already forecasting numbers). For RSA-2048, we roughly need LQC ~ 1400 (to hold necessary quantum registers and ancilla).
2. Logical Operations Budget (LOB)
How many logical gate operations (especially non-Clifford gates like Toffolis/T gates) can be executed reliably within the coherence/active time of the computer? This depends on logical error rates and fault-tolerant gate overhead. It effectively measures circuit depth feasible on the logical level. We might express this as “we can perform Y logical operations before the probability of a logical error exceeds Z%.” Microsoft’s rQOPS metric fits here – if a machine has, say, $$10^5$$ reliable ops per second and can run for $$10^6$$ seconds, it could do ~$$10^{11}$$ ops total. For RSA-2048, the estimated logical gate count is on the order of $$10^{11}$$–$$10^{12}$$ (depending on algorithm). Thus, we’d want LOB in that ballpark. If a computer’s LOB is $$10^8$$, it’s far short; if it’s $$10^{12}$$, it meets the bar.
We can benchmark LOB by running very deep circuits on the logical qubits (or using analyses from error-correcting code simulations). A simpler proxy is logical error rate per gate – e.g., if logical error per gate is $$10^{-12}$$, one could in principle run ~$$10^{12}$$ gates before error ~ 1. A key near-term goal is to demonstrate at least one logical qubit with error per operation < $$10^{-9}$$, then <$$10^{-12}$$.
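A tiny sketch of the two LOB bounds just described (throughput times runtime, and a failure budget divided by the logical error per gate); all numbers are illustrative assumptions.

```python
# Two rough bounds on the Logical Operations Budget (LOB).

def lob_from_throughput(rqops: float, runtime_seconds: float) -> float:
    """Operations executable within the machine's sustained runtime."""
    return rqops * runtime_seconds

def lob_from_error_rate(logical_error_per_gate: float, failure_budget: float = 0.5) -> float:
    """Operations before the whole-run failure probability exceeds the budget."""
    return failure_budget / logical_error_per_gate

print(f"{lob_from_throughput(1e5, 1e6):.0e} ops from 1e5 rQOPS over ~11.6 days of runtime")
print(f"{lob_from_error_rate(1e-12):.0e} ops at a 1e-12 logical error per gate")
```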
3. Quantum Operations Throughput (QOT)
This is essentially rQOPS – how many logical operations can be done per second. It encapsulates the speed of clocks, parallelism, and any waiting times (like magic state generation latency). A higher QOT means you can trade time for fewer qubits or vice versa. For example, Gidney’s approach traded 8 hours with 20 million qubits for 1 week with <1 million qubits. A machine with limited parallelism might have a low QOT, stretching the runtime. Ideally, we want QOT such that the factoring can be done in days, not years. If factoring needs ~$$10^{12}$$ ops and your QOT is $$10^6$$ ops/sec, that’s ~$$10^6$$ seconds (~11.6 days) – acceptable. If QOT is $$10^4$$ ops/sec, that’s $$10^8$$ seconds (~3 years) – too slow to be practical (though an adversary with a stable machine might still attempt it). We can measure QOT by how fast the machine can implement layers of logical gates. IBM’s CLOPS gives some indication at the physical level; extrapolating to logical, IBM’s target of 100 million gates on 200 logical qubits (Starling) implies a certain throughput. Indeed, IBM has said Starling will be capable of running 20,000× more operations than today’s quantum systems. That hints at multi-million ops/sec regimes by 2029. We propose including QOT explicitly, because a CRQC that technically can run $$10^{12}$$ operations but takes 5 years to do so is not nearly as impactful as one that does it in a week.
With these components, we can define a CRQC Benchmark Score. One approach is to express it as a fraction of RSA-2048 factoring capability achieved. For example, Benchmark Score = (LQC / 1000) × (LOB / $$10^{12}$$) × (QOT / $$10^6$$), normalized such that a score of 1.0 means the machine can factor 2048-bit RSA in about one week. This is a rough composite (the formula would be refined with better modeling), but conceptually:
- A score of 0.1 might mean the system could factor a 1024-bit RSA number in a similar time, or a 2048-bit number in roughly ten weeks (likely too slow to be practical, but technically possible).
- A score of 1.0 means Q-Day capability is achieved (2048-bit breakable within days/weeks).
- A score >1 would mean it can break RSA-2048 faster than a week (e.g. a score of 2 would correspond to roughly half a week, if time scales inversely with the score).
Such a composite index would combine the three key axes: amount of logical qubits, their quality (logical error rate/depth), and system speed. It would allow tracking progress year by year in a single figure – even if algorithmic targets shift, we recalibrate the index to the latest understanding of requirements.
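Here is a minimal sketch of how the composite score and a naive Q-Day projection (in the spirit of the estimator tool mentioned earlier) could be computed. The normalization thresholds follow the formula above, while the starting inputs and the per-axis annual growth factor are purely hypothetical assumptions, not roadmap claims.

```python
# Composite CRQC Readiness Score and a naive Q-Day projection, as proposed above.
# Growth is modeled as a uniform annual multiplier on every axis (a big simplification).
from dataclasses import dataclass

@dataclass
class CrqcInputs:
    lqc: float  # logical qubits usable in one computation
    lob: float  # reliable logical operations per run
    qot: float  # reliable logical operations per second (rQOPS)

# Reference requirements used in this post for week-scale RSA-2048 factoring.
REQ = CrqcInputs(lqc=1_000, lob=1e12, qot=1e6)

def readiness_score(x: CrqcInputs) -> float:
    """Score = (LQC/1000) * (LOB/1e12) * (QOT/1e6); 1.0 ~ RSA-2048 in about a week."""
    return (x.lqc / REQ.lqc) * (x.lob / REQ.lob) * (x.qot / REQ.qot)

def projected_q_day(x: CrqcInputs, annual_growth: float,
                    start_year: int = 2025, horizon: int = 30) -> int | None:
    """First year the score reaches 1.0 if every axis grows by `annual_growth` per year."""
    for year in range(start_year, start_year + horizon):
        if readiness_score(x) >= 1.0:
            return year
        x = CrqcInputs(x.lqc * annual_growth, x.lob * annual_growth, x.qot * annual_growth)
    return None

if __name__ == "__main__":
    # Purely hypothetical inputs, loosely inspired by late-2020s roadmap figures.
    hypothetical_2029 = CrqcInputs(lqc=200, lob=1e8, qot=1e6)
    print(f"hypothetical 2029 score: {readiness_score(hypothetical_2029):.1e}")
    print("Q-Day under an assumed 3x/year growth on every axis:",
          projected_q_day(hypothetical_2029, annual_growth=3.0, start_year=2029))
```

The output is only as good as the assumed growth factor; the point of the sketch is that the same three inputs drive both the score and the projected date.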
Another, perhaps simpler, way: define an RSA Benchmark Problem at various sizes and see how the quantum computer fares. For instance, define tasks like factor a 20-bit RSA number, factor a 30-bit RSA number, etc., under certain constraints (no prior knowledge, using Shor’s algorithm or equivalent). As hardware improves, the maximal bit-length it can factor will increase. Today, we’re around 4–6-bit factoring (e.g. factoring 15 or 21) in small demos. Eventually 32-bit, 64-bit, … up to 2048-bit. This is similar to classical RSA challenge records, except quantum won’t incrementally do 1000-bit before 2048 becomes possible (quantum will likely jump from trivial to very large once fault-tolerance is in place). Nonetheless, reporting “we factored a 48-bit semiprime on a quantum computer” would be big news and a concrete benchmark showing progress. Designing a standardized suite of quantum factoring challenges could incentivize teams to demonstrate capability on smaller instances as stepping stones to RSA-2048.
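As a sketch of what generating such standardized challenges might look like, here is a few-line generator of n-bit semiprimes using sympy; the challenge sizes and rules here are my own illustration, not an established suite.

```python
# Generate small "RSA challenge" semiprimes of roughly a given bit length, as
# stepping-stone factoring benchmarks.
from sympy import randprime

def rsa_challenge(bits: int) -> int:
    """Return a semiprime N = p*q built from two random primes of ~bits/2 each."""
    half = bits // 2
    p = randprime(2 ** (half - 1), 2 ** half)
    q = randprime(2 ** (half - 1), 2 ** half)
    while q == p:  # avoid p == q at very small sizes
        q = randprime(2 ** (half - 1), 2 ** half)
    return p * q

if __name__ == "__main__":
    for bits in (16, 32, 48, 64, 128):
        print(f"~{bits}-bit challenge: N = {rsa_challenge(bits)}")
```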
Yet another metric: measure how many bits of an RSA key can be factored per hour on the machine, or equivalently, how long it would take to break RSA-X for various X. As a hypothetical, a quantum computer might break RSA-512 (512-bit) in a week by 2028 – that would be a huge warning sign that RSA-2048 is within a couple more doublings of capacity.
The DARPA Quantum Benchmarking Initiative aligns with this thinking – DARPA is seeking application-specific benchmarks to measure when quantum computers reach “utility scale”. Cryptography is certainly one transformational application; a benchmark focused on cryptographic algorithms (factoring, discrete log) would quantify utility in that domain. In other words, instead of measuring abstract quantum volume, measure the time/resources to crack a standardized crypto problem. My CRQC readiness index is an attempt at that.
Estimating Q-Day with the New CRQC Readiness Benchmark
Using my proposed criteria, let’s assess the trajectory and estimate Q-Day, the day a quantum computer can break RSA-2048. I’ll integrate the latest available data (up to 2025) and roadmaps for the near future:
Logical Qubits (LQC) Trend
Right now (2025) LQC ~ 1 (no machine has a stable logical qubit that can run algorithms beyond a trivial length). By 2027, we expect first systems with double-digit logical qubits. IBM’s roadmap suggests ~ 50–100 logical qubits by 2027, and ~200 by 2029. Google similarly should be in that ballpark if their surface code scaling continues. IonQ and Quantinuum might demonstrate a few logical qubits with high fidelity in the second half of the decade (their approach may emphasize getting one logical qubit very robust, then a few, etc., given their higher physical fidelity).
By 2030, if multiple approaches succeed, we could see on the order of 1000 logical qubits available – whether in one machine or distributed (IBM explicitly mentions “1,000+ logical qubits in early 2030s”). That magic 1000 number meets the Shor algorithm requirement per Gidney’s paper. So in terms of LQC, around 2030 we hit the threshold needed for RSA-2048.
Logical Ops Budget (LOB) and Error Rates
IBM’s Starling system in 2029 is expected to execute 100 million quantum operations on 200 logical qubits without failure. That $$10^8$$ figure is the circuit depth per computation that the machine can handle on its logical qubits. Meanwhile the follow-on system Blue Jay (~2033) is aimed at 1 billion ops on 2000 logical qubits. These numbers are approaching what’s needed for RSA: we estimated ~$$10^{11}$$–$$10^{12}$$ operations. Blue Jay’s 1 billion ($$10^9$$) is still two to three orders of magnitude short of that, but could possibly factor smaller keys or break shorter ECC. With further error-correction improvements (or running the machine a bit longer than baseline), it might approach $$10^{10}$$–$$10^{11}$$ ops. Google hasn’t publicly put numbers on ops, but their focus on below-threshold error correction implies they are pushing logical error per gate toward the $$10^{-3}$$ and $$10^{-4}$$ range initially, aiming for $$10^{-6}$$ and beyond by late decade. By 2030, it’s conceivable that logical error rates of ~$$10^{-9}$$ per operation are achieved on surface code logical qubits (distance ~25-30 perhaps). If so, a sequence of $$10^9$$ gates would still accumulate on the order of one logical error – borderline, but maybe manageable with repetition or additional error mitigation. It might require a tad more (distance-30 for $$10^{-12}$$ error, etc.). In any case, the error correction ability seems on track to support $$10^9$$–$$10^{11}$$ gate operations by ~2030-2032 across various platforms (IBM’s claims support this, and academic resource estimates do too). Notably, Gidney’s paper assumed physical error $$10^{-3}$$ uniformly and found <$$10^6$$ physical qubits suffice. If physical errors drop to $$10^{-4}$$ or $$10^{-5}$$ (which experiments suggest is happening), the physical qubit count for the same logical performance drops further, or the logical error improves, increasing LOB. So hardware fidelity gains directly push the LOB upward.
Projection: by 2030, a state-of-the-art machine may be able to run ~$$10^{11}$$ logical ops in a days-long run, which is in range of RSA-2048 factoring.
Quantum Operation Throughput (QOT)
Speed is where different modalities diverge. Superconducting qubits have fast gate speeds (10-100 nanoseconds) and can be clocked very rapidly – millions of cycles per second. Ion traps have slower gates (tens of microseconds), more like thousands of ops per second, though parallel operation and mode connectivity can compensate somewhat. Photonic qubits, if realized, could potentially operate at GHz speeds optically, but their challenge is creating strong interactions. For throughput, IBM’s strategy is clear: add parallelism and modular systems to boost the number of operations per second. Their claim of 20,000× more operations by 2029 than today’s machines is telling – today’s 127-qubit Eagle has QV 128 and CLOPS maybe a few thousand; 20,000× more suggests into the $$10^8$$–$$10^9$$ operations per second overall across the machine. If Starling has 200 logical qubits, each potentially doing gates in parallel, that could be on the order of $$10^6$$ logical ops/sec (just rough reasoning). Blue Jay with 2000 logical might push $$10^7$$–$$10^8$$ logical ops/sec if fully utilized. Meanwhile, IonQ’s approach might achieve fewer ops/sec (given slower gates) unless they find ways to parallelize heavily with many traps or use photonic interconnects. However, IonQ doesn’t necessarily need as high a clock rate if their error rates are extremely low – they could run longer. That said, an adversary will prefer the fastest machine available for code-breaking. Estimate: by 2030, top superconducting or photonic QCs could achieve on the order of $$10^6$$–$$10^7$$ reliable ops per second (rQOPS). At $$10^6$$ ops/sec, factoring ($$10^{11}$$–$$10^{12}$$ ops) takes a few days to a few weeks; at $$10^7$$ ops/sec, it’s about a day for $$10^{12}$$ ops. So somewhere in that window, RSA-2048 goes from “multi-week computation” to “hours”. My benchmark score of 1.0 was defined around $$10^6$$ ops/sec and $$10^{12}$$ ops total (a week). If IBM’s 2033 machine does $$10^9$$ ops with 2000 logical (Blue Jay), and perhaps runs at a couple million ops/sec, it might factor RSA-2048 in <1 day. In other words, by the early 2030s, if plans hold, we are firmly in CRQC territory in terms of speed too.
All signs point to the early 2030s as the crossover point.
Combining the factors: 2030 is essentially when the projected logical qubit count (~1000), logical operations capability (~$$10^{11}$$+), and throughput ($$10^6$$ ops/sec) converge to what’s needed for RSA-2048. It’s no coincidence that several informed predictions (including my previous Q-Day analysis) peg 2030 ± 2 years for Q-Day.
- By 2025: Benchmark score is still effectively 0 (no CRQC yet). But the algorithmic component jumps (thanks to Gidney’s reduction of qubit needs), and we saw the first demonstrations of below-threshold error correction (inching LOB upward). No change in the size of numbers factored (still trivial 3×5=15-type demos), but the theoretical feasibility of factoring RSA-2048 with <1M qubits is established on paper.
- 2026–2027: Expect benchmark score to rise into the 0.1 range. We might see a small RSA (say 32-bit or 64-bit) factored on a prototype fault-tolerant setup – just to prove end-to-end capability. Logical qubit count in practice might hit double digits. If someone factors, e.g., a 128-bit RSA number with a quantum hybrid system, that would be a watershed benchmark – showing sub-1,000-bit keys starting to fall. Classical computers can already factor 512-bit keys with modest effort (the classical record stands at 829 bits), so the quantum advantage will be clearly demonstrated somewhere in this period on mid-size keys.
- 2028–2029: Score perhaps 0.3–0.6. Hardware with 100+ logical qubits, logical error rates ~1e-3 per cycle or better, and system throughput climbing. A 256-bit or 512-bit RSA key might fall to a quantum computer (in collaboration with classical post-processing) as a proof-of-concept. IBM’s 2029 machine comes online with 200 logical qubits and 100 million gate capacity – possibly enough to tackle ~512–1024 bit keys if pushed to its limits (though it might take many weeks or have a high error chance at those limits).
- 2030–2032: Score approaches 1.0. Logical qubit counts hit the requisite range (~1000), error-corrected operations cross the $$10^{11}$$ mark, and rQOPS is high enough. By around 2030-31, a well-funded adversary or national lab might successfully factor a 2048-bit RSA key – quietly if not publicly. At that point, Q-Day has effectively arrived. In public, we will likely see announcements of factoring increasingly large keys as a drumbeat: e.g., “World’s first 1024-bit RSA quantum factoring accomplished”, soon followed by the big prize. Keep in mind, governments might not announce when they achieve it – they could do it in secret. But from a benchmarking perspective, my index hitting ~1.0 is the alarm bell for the world to transition cryptography (ideally well before then!).
Is it just qubits and gates? It’s worth noting other practical factors: engineering challenges like I/O, cooling, crosstalk, and power consumption. A CRQC might be a massive machine consuming megawatts of power to keep thousands of qubits at millikelvin temperatures for weeks. My benchmark doesn’t directly include power or physical footprint, but these could slow progress if underestimated. However, the major players (IBM, Google, etc.) are already building infrastructure (IBM’s Poughkeepsie quantum data center, for example, to house these machines). DARPA’s benchmarking initiative explicitly asks if a useful quantum computer can be built “much faster than conventional predictions” – implying they are open to surprise leaps. I’ve baked in some conservative assumptions, but one should always consider: what if an undisclosed breakthrough (or a nation-state effort) accelerates things? My CRQC index would jump accordingly – say someone finds a way to get logical qubit error to $$10^{-15}$$ (making $$10^{12}$$ ops trivial), then the LOB component is instantly solved and Q-Day might jump closer.
Conclusion
Benchmarking quantum capabilities for cryptography is both critical and challenging. We can’t rely on any single metric like qubit count to tell us how near we are to breaking RSA-2048. A combination of logical qubit count, error-corrected circuit depth, and operational speed must reach certain thresholds in unison. Existing benchmarks – Quantum Volume, Algorithmic Qubits, etc. – each address parts of this, but a CRQC-specific yardstick brings them together. By focusing on a concrete goal (factoring a 2048-bit RSA key), I defined a composite measure of progress. This new benchmark suggests that, as of 2025, we are perhaps on the order of one-tenth of the way there (in capability), but the pace of improvement is accelerating. With major leaps in algorithm efficiency and hardware roadmaps aiming at fault-tolerant machines by 2029, we could plausibly hit the needed 1000 logical qubits and $$10^{12}$$ operations within ~5–7 years. That puts Q-Day around 2030 – aligning with the latest expert predictions that RSA-2048 will fall in the “very early 2030s” timeframe.
It’s important to continually update this benchmark as new data comes in. Each time a lab reports a better logical qubit or a company announces a 1000-qubit chip, we should ask: How does this move the needle toward cryptographic breakability? For now, the needle is steadily moving out of the lab and into the danger zone for our current cryptosystems. The prudent course for cybersecurity is clear: assume Q-Day is coming within the decade and transition to quantum-resistant cryptography well before my benchmark hits 1.0. By the time a quantum computer is actually announced to have cracked RSA-2048, it will be too late – my metric will have been nearing the threshold for a while. The role of a good benchmark is to provide early warning.
The proposed CRQC readiness index (or “Q-Day readiness score”) could serve as that early warning system for industry and government, synthesizing complex quantum engineering progress into a simple indicator. I have shown what factors must be tracked and how close they are to the critical levels. If nothing else, this exercise underlines that quantum progress should be measured in context – context of a specific, high-value capability like cryptanalysis. By measuring what really matters (breaking our encryption), we cut through hype and skepticism and focus on the remaining gap to be closed. As of now, that gap is measurable in years, not decades. Each new benchmark result – a higher quantum volume, a faster error decoder, a bigger chip – should be celebrated and scrutinized for how it reduces the time to Q-Day.