The Trouble with Quantum Computing and Q-Day Predictions
Introduction
Quantum computing timelines are all over the map. Depending on whom you ask, a cryptographically relevant quantum computer (CRQC) capable of breaking RSA encryption might be here in a few months – or not until the 2050s.
We’ve heard doom-laden warnings that Q-Day (the day a quantum computer cracks public-key crypto) could be “just a year or two away, or already here in secret labs,” while skeptics retort that it’s so far off as to effectively never happen. The truth lies somewhere between these extremes. Yet the sheer spread of forecasts, from imminent to half-a-century out, is creating chaos for businesses and policymakers. Worse, these divergent predictions often stem not from genuinely new data, but from flawed reasoning or selective modeling.
The Timeline Chaos
It’s no surprise that people are confused when CRQC forecasts span decades. Today you can find reputable sources suggesting a quantum breakthrough in the early 2030s (indeed, many expert surveys now coalesce around the 2030s as the likely timeframe), alongside other voices claiming we won’t need to worry until 2050+. This timeline chaos is dangerous. If decision-makers can justify any narrative, from “panic now” to “do nothing for 30 years”, simply by cherry-picking a forecast, how do we plan for the quantum threat?
What’s important to recognize is that this wide spread of predictions usually isn’t due to one group having secret knowledge that others lack. It’s often the same underlying data being interpreted in massively different ways. In other words, biases and bad assumptions are the culprits. Let’s dissect a few of the most common mistakes that lead to such a noisy prediction landscape.
Common Flaws in Forecasting
Forecasters, both in industry and academia, have fallen into several predictable traps when predicting Q-Day:
Linear or Static Extrapolation
A frequent mistake is straight-line extrapolation of past hardware progress, treating the modest qubit gains of the last decade as the norm for the future. This ignores the possibility (indeed, the industry’s stated intent) of exponential growth or sudden leaps (see “Neven’s Law”). For example, a model might simply project that because we went from ~50 qubits in 2017 to ~1,000 qubits in 2023, we’ll only see a few thousand qubits by the 2040s – pushing any RSA-breaking machine to the late 2040s or 2050s. Such a cautious single-curve extrapolation may be grounded in past data, but it becomes outdated if the quantum industry hits an inflection point (e.g. breakthroughs in modular architectures or massive funding enabling faster scaling).
In reality, companies like IBM and IonQ are aiming for far faster growth – IBM, for instance, has publicly outlined a path to a 100,000-qubit quantum supercomputer by 2033, and IonQ claims it can reach on the order of 2 million physical qubits by 2030. If even a fraction of those ambitious roadmaps comes true, the straight-line models will have vastly underestimated the pace. In short: assuming slow, linear progress can make timelines deceptively long.
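To see how much the choice of curve matters, here is a minimal sketch comparing a straight-line fit with a constant-growth fit over the same short history. The qubit counts are rough placeholders, not official vendor figures, and the one-million-qubit threshold is simply borrowed from the Gidney 2025 estimate discussed below:

```python
# Toy illustration: the qubit counts below are rough placeholders, not official vendor figures.
import numpy as np

years = np.array([2017, 2019, 2021, 2023, 2025], dtype=float)
qubits = np.array([50, 128, 433, 1121, 1400], dtype=float)

# Straight-line fit: qubits ~ a * year + b
a, b = np.polyfit(years, qubits, 1)

# Exponential fit: log(qubits) ~ c * year + d, i.e. a constant yearly growth factor
c, d = np.polyfit(years, np.log(qubits), 1)

target = 1_000_000  # assumed physical-qubit requirement for a CRQC (order of the Gidney 2025 estimate)

year_linear = (target - b) / a
year_exp = (np.log(target) - d) / c

print(f"Straight-line extrapolation reaches {target:,} qubits around the year {year_linear:.0f}")
print(f"Constant-growth extrapolation reaches {target:,} qubits around the year {year_exp:.0f}")
```

Same data, two curves, and the projected dates land decades apart. That gap is exactly why the extrapolation model behind a forecast deserves as much scrutiny as the data feeding it.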
Ignoring Algorithmic and QEC Breakthroughs
Another flaw is failing to incorporate the latest advances in quantum algorithms and error correction. Predicting Q-Day is not just about hardware; it’s about how efficiently we can use that hardware to break crypto.
Over the past few years, researchers have drastically slashed the resource requirements for tasks like factoring RSA. For example, the well-known 2019 optimization by Gidney & Ekerå showed how to factor 2048-bit RSA with around 6,000 logical qubits (roughly 20 million noisy physical qubits) in about 8 hours – a huge improvement over naive implementations of Shor’s algorithm, which would have needed far more qubits and time. More recently, Chevignard et al. (2024) introduced techniques that process the factoring problem in smaller chunks, sharply reducing the number of qubits required – at the cost of a longer runtime. And just this year, Craig Gidney (2025) published a paper cutting qubit needs by more than an order of magnitude relative to the 2019 estimate: factoring RSA-2048 in under a week with fewer than one million physical qubits, corresponding to roughly 1,000-1,400 logical qubits when leveraging modern error-correction tricks.
At the same time, there is a steady drumbeat of improvements in QEC (quantum error correction) and in the hardware fidelities it depends on – the ingredients that make quantum computers reliable enough to run long computations. Recently, physicists at the University of Oxford set a new world record for quantum logic accuracy, reporting single-qubit gate error rates below $1\times10^{-7}$, meaning fidelities exceeding 99.99999%.
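Why do fidelity records matter for timelines? Because physical error rates drive how many physical qubits each error-corrected logical qubit costs. Here is a back-of-the-envelope sketch using a commonly quoted surface-code heuristic; the threshold, prefactor, and target logical error rate are textbook-style assumptions chosen for illustration, not measured values:

```python
# Rough surface-code heuristic (a commonly cited approximation, not an exact model):
#   logical error rate           p_L ~ 0.1 * (p / p_th) ** ((d + 1) / 2)
#   physical qubits per logical  ~ 2 * d**2

def min_code_distance(p_phys, p_th=1e-2, p_logical_target=1e-12):
    """Smallest odd code distance d whose estimated logical error rate meets the target."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

for p_phys in [1e-3, 1e-4]:  # a typical present-day error rate vs. a hoped-for tenfold improvement
    d = min_code_distance(p_phys)
    print(f"p = {p_phys:.0e}: distance {d}, roughly {2 * d * d} physical qubits per logical qubit")
```

Under these toy numbers, a tenfold improvement in physical error rates cuts the per-logical-qubit overhead several-fold, which is why fidelity milestones feed directly into resource estimates and, ultimately, timelines.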
These algorithmic and QEC advances are game-changers – they moved the goalposts much closer. Yet, remarkably, some predictions out there blithely ignore them, still citing “we need 1 billion qubits” or using old data from years ago. Any forecast that doesn’t factor in the latest published optimizations is painting an overly rosy picture of safety. Quantum resource estimates from even 2-3 years back are likely obsolete. In forecasting, one must continuously update the model as researchers find ways to do more with fewer qubits; otherwise the timeline will be way off.
Unrealistic Definitions of “Broken”
There’s also a semantic trick that can skew predictions: how one defines a cryptosystem as “broken” by quantum. Some analyses only declare RSA “broken” when a quantum computer can crack it quickly (say, in hours or days), whereas others consider it broken as soon as it’s feasible at all, even if that first crack takes weeks or months. This nuance can lead to dramatically different timelines.
For instance, Gidney’s 2025 result implies a million-qubit machine could factor RSA-2048 in roughly 7 days. If you believe such a machine could exist by, say, 2035, then Q-Day could be ~2035 (with a one-week attack time). However, a more conservative prognosticator might insist that RSA isn’t “really” broken until that attack can be done in mere hours, which might not happen until hardware improves a generation or two beyond that point. In effect, they shift the goalpost: the first demonstrated break doesn’t count in their book; only a fast, highly practical break counts.
From a security perspective, this is a dangerous outlook – if an adversary can, say, crack your encryption in a week or even a month, that encryption is toast. But by using an unrealistically stringent criterion for “broken,” one can claim “RSA is safe until 2045 or 2050” while ignoring that the earlier feasibility of the attack (albeit slower) already means trouble.
It’s crucial to define Q-Day in a meaningful way: arguably, the day a quantum computer can factor RSA at all (within a timescale shorter than the data’s sensitivity lifetime) is the day RSA is effectively broken. Adding arbitrary time qualifiers (“broken means under 24 hours”) only lulls us into false comfort.
Cherry-Picked Stats and Surveys
Finally, a common sin is cherry-picking data points or expert opinions that support a desired timeline, without understanding context or consensus. We see reports that cite, say, a single pessimistic survey of scientists or an outdated statistic (e.g., “experts say only a 5% chance by 2030”) to argue there’s no urgency – ignoring the fact that most recent expert surveys actually put the likely Q-Day in the 2030s. Or someone will quote a tech CEO who claims quantum is decades away, without noting that many other industry leaders disagree.
Used selectively, you can find “evidence” for virtually any date you want. But that isn’t honest analysis. For instance, if one consultant’s report conveniently highlights only the most comforting estimates (perhaps to please a client who doesn’t want to spend on remediation), that’s cherry-picking.
Sound forecasting demands weighing the full range of credible data – and right now, the weight of evidence (error-correction milestones, algorithmic advances, hardware roadmaps, expert consensus) leans toward an earlier Q-Day, not the 2050s. In short, be very wary of predictions that lean on one or two data points in isolation. Often the authors have filtered the inputs to fit a narrative.
In summary, these flaws (naive extrapolation, ignoring progress, moving the definitional goalposts, and selective citation) explain much of the divergence in quantum timelines. Now let’s consider another factor: how human nature and incentives can color ostensibly “objective” predictions.
The Role of Bias and Incentives
It turns out predictions about quantum computing’s arrival date often reveal more about the predictor’s incentives than about physics. Two opposing biases frequently come into play:
1. Urgency and Hype (Fear Sells)
On one side, there are vendors, consultants, and even some researchers who benefit from urgency. If you’re selling a product or pushing an agenda (say, convincing governments to invest in quantum or in post-quantum security), a near-term quantum threat gets people moving. This can subconsciously tilt forecasts earlier.
We’ve all seen instances of subtle fear-mongering: hints that “maybe certain intelligence agencies know something we don’t,” or that a secret quantum breakthrough could happen any moment – all without hard evidence, but enough to sow fear.
Many quantum computing firms or cybersecurity consultants emphasize the soonest plausible Q-Day date in their talks, because a looming threat is a call to action (and, often, budget). To be clear, many of these folks are well-intentioned – early warning is helpful, and certainly nobody should be complacent. But we must acknowledge the bias: those sounding the alarm the loudest sometimes have a vested interest (financial or otherwise) in stoking that alarm. It doesn’t mean their data is wrong, but their interpretation might lean “worst-case scenario” to motivate urgency.
2. Comfort and Inertia (Delay Tactics)
On the other side, we have executives, IT managers, or even policy-makers who would rather not deal with the quantum threat right now. Adapting to quantum-safe cryptography can be expensive and complex; it’s tempting to kick the can down the road. These individuals might latch onto the longest timeline out there to justify inaction. If a consultant or report says “no need to worry until 2040 or later,” that can become the convenient gospel to avoid budget and effort today. Confirmation bias sets in: they’ll give more weight to any expert quote or study that pushes Q-Day further out.
A striking example of this was the reaction to the QTT demo: many busy executives saw a single demo chart suggesting “RSA-2048 safe until 2051” and immediately concluded “the experts say we have 25 years – so no rush”. This is dangerously misleading, because even if (and that’s a big if) 2051 were the expected date, the prudent approach is to act well before an expected cryptographic break. Unfortunately, human nature often seeks the path of least resistance – and a comfortable timeline gives cover to procrastinate. Whether it’s out of genuine belief or convenient excuse, bias toward optimistic (long) timelines can pervade corporate strategy, leaving systems vulnerable if reality comes sooner.
It’s worth noting that even academia isn’t free from bias. Researchers making forecasts may err on the side of caution (no one wants to cry wolf and be wrong), or conversely might over-emphasize their own breakthrough’s impact on the timeline. The key is to actively counterbalance these biases: when a prediction is given, ask “what incentive might this person or organization have to frame it this way?” Vendors hyping a nearer Q-Day, or officials downplaying the risk to avoid panic, both deserve a healthy dose of skepticism.
What Better Forecasting Looks Like
Are all predictions doomed to be either wrong or biased? Not necessarily. We can’t magically know the future, but we can demand more from our forecasting methods. A higher analytical standard is both possible and necessary. Here are some principles for better Q-Day forecasting that address the issues we’ve discussed:
Base Models on Current, Peer-Reviewed Research
The starting point for any forecast should be the best available scientific data on resource requirements. In practice, that means regularly updating your assumptions with the latest algorithmic and error-correction breakthroughs.
For example, as of mid-2025, a reasonable baseline is that an RSA-2048-breaking machine would need on the order of 1,000-1,400 logical qubits (per Gidney’s 2025 paper) running for about a week. That’s a far cry from the roughly 20 million physical qubits the 2019 estimate called for. If your timeline model is still using 2019-era numbers (millions of physical qubits, etc.), it’s already outdated. Likewise, integrate recent QEC improvements – e.g. if we know a logical qubit might soon be achieved with hundreds (not thousands) of physical qubits thanks to record fidelities, plug that into the model.
Using up-to-date research inputs ensures you’re forecasting from today’s vantage point, not yesterday’s. It also gives you a clearer picture of what the remaining gaps are. Today’s estimates say breaking RSA-2048 might require fewer than a million physical qubits and about a week of runtime; tomorrow’s might lower it further. A good forecaster stays on top of this moving target.
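As a quick sanity check on how far the baseline has moved, here is a small sketch comparing the two headline estimates cited above. The per-logical-qubit overheads are back-solved from the published totals rather than taken directly from the papers, so treat them as illustrative:

```python
# Rough comparison of two published resource baselines for breaking RSA-2048.
# The phys_per_logical figures are back-solved from the headline totals (illustrative only).
baselines = {
    "Gidney & Ekera 2019": {"logical": 6_000, "phys_per_logical": 3_300, "runtime_hours": 8},
    "Gidney 2025":         {"logical": 1_400, "phys_per_logical": 700,   "runtime_hours": 5 * 24},
}

for name, b in baselines.items():
    total_physical = b["logical"] * b["phys_per_logical"]
    print(f"{name}: ~{total_physical:,} physical qubits, ~{b['runtime_hours']} hours of runtime")
```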
Align with Hardware Roadmaps (But Treat Them as Scenarios)
The other half of the equation is hardware progress – how quickly will we get those necessary qubits? Here, it’s useful to look at the published roadmaps from leading quantum developers and national programs. IBM’s roadmap, for instance, envisions on the order of 1,000 logical qubits by the early 2030s (they aim for ~200 logical qubits by 2029 as an interim step). Government initiatives in the EU, US, and elsewhere similarly target the 2030-2035 window for having large-scale quantum systems or at least for completing migration to PQC (a strong indication of when they fear the threat will materialize).
Now, one shouldn’t take any single roadmap as gospel – industry projections can be overly optimistic. But they serve as concrete scenarios to model. At one extreme, suppose all the aggressive targets (IBM’s 100K qubits, IonQ’s millions, etc.) are hit on schedule – you’d better believe RSA falls as soon as those machines come online, potentially as early as the late 2020s or around 2030. At the other extreme, if progress stalls significantly, maybe it’s closer to 2040.
A robust forecast will incorporate multiple scenarios: for example, a best case (fast progress) timeline, an expected case (moderate, hitting early 2030s), and a worst case (slow, pushing late 2030s or 2040). Each scenario should be explicitly tied to assumptions like “X logical qubits by year Y, error rate Z, algorithmic speedup factor K…”. This way, people can see why a given timeline would happen, and what changes would shift it. Rather than one single date pretending to be certain, it’s a range of possibilities with transparent reasoning.
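Here is a minimal sketch of what such a scenario model can look like. The starting qubit count, the threshold, and the growth factors are assumptions chosen purely for illustration; the value is that every printed date traces back to an explicit assumption anyone can challenge:

```python
import math

# Illustrative scenario model: when does the physical qubit count cross an assumed CRQC threshold?
START_YEAR = 2025
START_QUBITS = 1_000        # assumed starting point, roughly today's largest announced chips
CRQC_QUBITS = 1_000_000     # assumed requirement, on the order of the Gidney 2025 estimate

scenarios = {
    "best case (roadmaps hit on schedule)": 4.0,   # qubit count roughly quadruples each year
    "expected case":                        2.4,   # ~2.4x growth per year
    "worst case (progress slows)":          1.6,   # ~1.6x growth per year
}

for name, growth in scenarios.items():
    years_needed = math.log(CRQC_QUBITS / START_QUBITS) / math.log(growth)
    print(f"{name}: threshold crossed around {START_YEAR + years_needed:.0f}")
```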
Quantify and Document Assumptions
Transparency is paramount. Any prediction worth its salt must come with footnotes: How many qubits are assumed? What runtime is considered “practical”? What error rates, what algorithmic approach? And importantly, why were those chosen?
For instance, if a forecast assumes “10,000 physical qubits in 2030,” is that based on trendline extrapolation, a specific company’s promise, or just a guess? If it assumes no further algorithm improvements beyond 2025, that should be stated (and some justification offered for why none would occur). By quantifying assumptions, you also make it easier for others to tweak them and see the outcome (just like QTT allows). This turns predictions from edicts (“it’ll be 2040, trust us”) into models that can be tested and debated. It also forces forecasters to confront their own biases – if you find you had to cherry-pick an unrealistically low error rate just to push Q-Day past 2045, that should tell you something.
The community should favor forecasts that lay their cards on the table. A prediction that says “we assume N logical qubits by year Y based on these data” is inherently more credible than one that just says “experts believe X” with no further detail.
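One lightweight way to lay those cards on the table is to make the assumptions a first-class artifact rather than a footnote buried in prose. A sketch, with hypothetical field names and values:

```python
from dataclasses import dataclass, asdict

@dataclass
class ForecastAssumptions:
    """Explicit, inspectable inputs behind a Q-Day estimate (field names are hypothetical)."""
    physical_qubits_by_year: dict   # e.g. {2030: 10_000, 2033: 100_000}
    physical_error_rate: float      # assumed two-qubit gate error rate
    logical_qubits_required: int    # taken from the resource-estimation literature
    attack_runtime_days: float      # what this forecast counts as a "practical" break
    source_notes: str               # where each number comes from

assumptions = ForecastAssumptions(
    physical_qubits_by_year={2030: 10_000, 2033: 100_000},
    physical_error_rate=1e-3,
    logical_qubits_required=1_400,
    attack_runtime_days=7,
    source_notes="Qubit counts from a vendor roadmap; logical-qubit figure from Gidney 2025.",
)
print(asdict(assumptions))
```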
Present Findings as Ranges or Conditions, Not Certainties
Given all the uncertainty, the honest way to communicate Q-Day predictions is with humility and ranges. Instead of pronouncing “2040 is the year,” a better approach is: “If hardware and algorithms progress at X rate, we could see RSA broken by 2030; if they progress at only half that rate, maybe 2035-2040.” Provide the spectrum of outcomes and, if needed, assign confidence levels. This conveys that we are making educated guesses, not receiving prophecy from the oracle. It also helps counteract misinterpretation – an audience presented with a range (say “late 2020s to mid-2030s, with early 2030s most likely”) is less likely to seize on the single most convenient number.
There’s a psychological effect too: when people see the span of possible timelines, they appreciate that earlier Q-Day isn’t a wild outlier but a real possibility, and that later Q-Day isn’t a guarantee. We should always emphasize what we don’t know as much as what we think we know.
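A toy Monte Carlo sketch shows how a range, rather than a single date, can be produced; every distribution below is invented for illustration and is not calibrated to real data:

```python
import math
import random
import statistics

random.seed(0)

def sample_qday_year():
    """One draw of a possible Q-Day year; all distributions here are invented for illustration."""
    growth = random.uniform(1.5, 4.0)            # uncertain yearly qubit growth factor
    required = random.choice([3e5, 1e6, 3e6])    # uncertain physical-qubit requirement
    start = 1_000                                # assumed 2025 starting point
    return 2025 + math.log(required / start) / math.log(growth)

samples = sorted(sample_qday_year() for _ in range(10_000))
print(f"10th percentile: {samples[1_000]:.0f}")
print(f"median:          {statistics.median(samples):.0f}")
print(f"90th percentile: {samples[9_000]:.0f}")
```

Reporting the 10th/90th percentile band alongside the median makes it much harder for a reader to walk away with only the single most convenient number.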
Continually Revise and Challenge the Model
Finally, good forecasting is an ongoing process. It’s not one report that sits on a shelf for 5 years. The field of quantum computing is moving fast; each new breakthrough should prompt us to revisit the timeline.
Forecast models should be updated frequently (certainly annually, if not quarterly) with new data. And the community should be encouraged to stress-test each other’s assumptions. For example, if someone forecasts “not until 2040,” others should ask “what about the recent paper that cut qubit requirements by 10x? If that’s included, does the timeline change?”
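To see why that question matters, here is a back-of-the-envelope sketch of how far a tenfold cut in qubit requirements pulls the crossing year forward under steady exponential growth (the growth factors are illustrative assumptions):

```python
import math

# Under steady exponential growth, a 10x cut in the qubit requirement
# moves the crossing year earlier by log(10) / log(growth factor).
for growth in [1.5, 2.0, 4.0]:
    shift = math.log(10) / math.log(growth)
    print(f"At {growth}x qubits per year, a 10x requirement cut moves the date ~{shift:.1f} years earlier")
```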
In other words, we need a culture of critical peer review for predictions – much like we have for technical research. Just as cryptographers scrutinize each other’s algorithms, we should scrutinize the models and assumptions behind Q-Day claims. By doing so, we inch closer to consensus and away from the current free-for-all of guesses.
In essence, better forecasting looks a lot less like a mystical art and more like scenario planning combined with scientific rigor. It acknowledges uncertainty but doesn’t hide behind it; instead, it explores it. Importantly, it also means communicating these forecasts responsibly. That could include using tools to show how changing assumptions shifts the timeline, thereby educating stakeholders on why a prediction is what it is. As I recommended in the QTT review, anyone making or consuming a timeline prediction should play with the assumptions: “plug in a faster development curve or the latest algorithmic advances and see how the projected date jumps earlier… This will give you a range of possible futures, not just the rosiest one”. If we foster this mindset, predictions will become more of a discussion (what if this, what if that?) rather than a one-way declaration.
Conclusion
The takeaway from all of this is a call to the community: we must hold ourselves to a higher analytical standard when predicting Q-Day. Quantum computing’s impact on security is too important for us to tolerate sloppy forecasts. Yes, we will always have uncertainty – no one can pinpoint the exact year RSA-2048 will be broken by quantum (and if someone claims they can, be skeptical!). But not knowing the exact year is no excuse for imprecise thinking. We can, and must, be much more disciplined in how we model, interpret, and communicate timelines.
That means challenging predictions that seem driven by hype or complacency. It means demanding to see the assumptions behind the claims. It means preferring forecasts that engage with the full breadth of current research – and that clearly admit their own margins of error. If we do this, the range of credible predictions will likely narrow (perhaps coalescing around the early 2030s, given what we know now), and the outliers will be easier to spot as dubious. Stakeholders will get a more consistent message, which will help them allocate resources appropriately: neither in blind panic nor in reckless delay, but in steady, proactive preparation.
The trouble with quantum computing predictions so far has been that too many have been more speculation than science, more influenced by bias than by balanced analysis. We have the tools and knowledge to do better. By embracing a data-driven, scenario-based approach, we can turn timeline forecasting from a source of confusion into a valuable planning aid.