What Quantum Computers Can Do Better Than Classical Computers
On a crisp autumn night in 2019, a team of Google engineers crowded around a dilution refrigerator housing a chip called Sycamore. When their measurements came in, cheers erupted – the 53-qubit processor had performed in 200 seconds a calculation that, by Google’s estimate, would have taken the world’s fastest supercomputer 10,000 years. This milestone, heralded as the first demonstration of quantum supremacy, was a dramatic proof-of-concept that under the right conditions a quantum computer can outrun any classical machine. IBM quickly tempered the excitement, arguing that with clever algorithms the task could actually be done on a classical supercomputer in just 2.5 days, not millennia. Debate aside, Google’s experiment showed the world a tantalizing glimpse of the future: a future where quantum computers tackle problems that overwhelm even our mightiest classical computers. It raises the question – beyond stunt experiments, what kinds of problems are quantum computers uniquely suited for, and why?
To understand where quantum computers shine, it helps to grasp how fundamentally different they are under the hood. A classical computer processes bits that are strictly 0 or 1, like light switches firmly in the “on” or “off” position. A quantum computer, by contrast, uses quantum bits, or qubits, which can inhabit a mysterious blend of states thanks to a quantum phenomenon called superposition. In essence, a qubit can be 0, 1, or both at the same time – a bit like a coin spinning in mid-air, showing neither heads nor tails but a blur of possibilities. In fact, eight qubits can exist in all $$2^8=256$$ combinations of 0s and 1s simultaneously. It’s as if those qubits conduct parallel computations in a single device – exploring a multitude of outcomes at once – whereas eight classical bits could only hold one of those 256 numbers at any moment.
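To make that concrete, here is a minimal NumPy sketch (illustrative only) of what an 8-qubit state looks like to a simulator: a vector of 256 amplitudes in equal superposition, from which a measurement still returns just one 8-bit outcome.

```python
import numpy as np

n = 8                                   # number of qubits
dim = 2 ** n                            # 256 amplitudes for 8 qubits

# Equal superposition over all 256 basis states (what n Hadamard gates produce).
state = np.full(dim, 1 / np.sqrt(dim))

# The state vector "holds" all 256 possibilities at once...
print(f"{dim} amplitudes, each with probability {abs(state[0]) ** 2:.4f}")

# ...but a measurement collapses it to a single 8-bit outcome.
outcome = np.random.choice(dim, p=np.abs(state) ** 2)
print(f"measured bitstring: {outcome:0{n}b}")
```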
Qubits can also become entangled, linking their states so that a measurement on one instantly influences the other, no matter how far apart they are. Albert Einstein famously called entanglement “spooky action at a distance,” but this spooky coordination is a real and crucial resource for quantum computing.
Finally, qubits exhibit quantum interference – their probability waves can combine, reinforcing some outcomes and canceling others. This is the secret sauce that lets quantum algorithms amplify the probability of correct answers while wiping out incorrect ones, like noise-canceling headphones for computations. By choreographing how quantum states evolve, a well-designed algorithm makes the probability waves of wrong answers cancel each other out (destructive interference), while amplifying the waves of correct answers (constructive interference). In essence, quantum computation uses interference to “zero in” on solutions among an exponentially large set of possibilities. A classical computer would have to slog through possibilities one by one, but a quantum computer can use interference to eliminate vast swaths of wrong paths in a single computation. This is why, for certain problems with the right structure, quantum computers can find answers much faster than any classical machine. Imagine a quantum computer searching a maze: rather than brute-force checking every path, it’s as though the wrong turns cancel themselves out and the right path is magically enhanced. However, this advantage applies only to specific kinds of problems. The challenge is “choreographing” interference for useful tasks – something only a few algorithms achieve so far.
These exotic features – superposition, entanglement, interference – give quantum computers a novel computational toolkit. But what does that mean in practice? What specific problems can harness these effects to beat a classical computer?
When Quantum Outruns Classical
Factoring the Unfactorable: Shor’s Algorithm and Cryptography
One of the most famous quantum breakthroughs is Shor’s algorithm for integer factorization. In 1994, mathematician Peter Shor discovered a quantum algorithm that could factor large numbers exponentially faster than any known classical method. In practical terms, this means a sufficiently powerful quantum computer could crack RSA encryption (which relies on the difficulty of factoring huge numbers) – a task essentially impossible for classical computers. Shor’s algorithm was a watershed moment that showed in principle the dramatic speedups quantum computing can offer: it turned a problem thought to require astronomical time into one solvable in polynomial time.
Shor’s result remains the poster child for quantum advantage. It demonstrated how a quantum machine could break the most advanced cryptographic systems of the time. For example, today’s 2048-bit RSA keys would take classical supercomputers longer than the age of the universe to factor. A quantum computer running Shor’s algorithm, by contrast, could factor such a key in a feasible time if the machine were large enough. How large? One estimate is that about 20 million qubits would be needed to crack a 2048-bit RSA key in roughly 8 hours. This is far beyond today’s quantum hardware (which is just crossing the 1000-qubit scale), so RSA and similar cryptosystems aren’t doomed yet. But the threat is serious enough that the world is actively developing post-quantum cryptography to replace RSA before quantum computers reach that power.
In summary, factorization is a prime example of a problem that quantum computers can eventually do better (indeed, unimaginably faster) than classical ones. Shor’s algorithm leverages quantum Fourier transforms and interference to find factors in polynomial time, a direct application of quantum parallelism that has no known efficient classical counterpart. It remains a critical driver of quantum research – and a reason governments and companies are racing to build bigger quantum machines.
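For intuition about where the quantum speedup enters, here is a rough Python sketch of Shor’s classical outer loop: the reduction from factoring to order finding. The `order_of` helper below is a brute-force stand-in for the step a quantum computer performs efficiently with the quantum Fourier transform; it is precisely this step whose exponential cost keeps the approach impractical on classical machines for large numbers (toy sizes only here).

```python
import math
import random

def order_of(a, N):
    """Order finding: the step Shor's algorithm performs with a quantum
    Fourier transform. Brute-forced here, which is what costs exponential
    time classically."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N):
    """Classical reduction from factoring to order finding (toy sizes only)."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                      # lucky guess already shares a factor
        r = order_of(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            x = pow(a, r // 2, N)
            return math.gcd(x - 1, N)     # nontrivial factor of N

print(shor_classical_part(21))            # prints 3 or 7
```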
Searching Unstructured Data: Grover’s Algorithm
Another foundational quantum algorithm is Grover’s algorithm, developed by Lov Grover in 1996, which tackles the task of searching an unstructured database. Suppose you have N unsorted items and you want to find one special item – classically, you might have to check O(N) items on average. Grover’s quantum algorithm finds the item in roughly O(√N) steps. This is a quadratic speedup: not as dramatic as Shor’s exponential leap, but still a significant acceleration for large N.
Grover’s algorithm works by repeatedly amplifying the probability of the correct answer using interference, essentially “marking” the correct item’s amplitude and then interfering all states so that the marked one stands out. It’s been proven that this √N scaling is the best possible for an unstructured search – and Grover’s algorithm achieves that optimal bound. In practical terms, Grover’s speedup could be applied as a subroutine to many problems: anything that involves brute-force search (password cracking, certain optimization or SAT problems, etc.) can theoretically get a quadratic speedup from Grover.
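A small state-vector simulation (a sketch, not real hardware code) makes that amplitude-amplification loop concrete: an oracle flips the sign of the marked item’s amplitude, a diffusion step reflects every amplitude about the mean, and after roughly (π/4)√N rounds the marked item dominates the measurement statistics.

```python
import numpy as np

n = 10                                    # 10 qubits -> N = 1024 items
N = 2 ** n
marked = 700                              # index of the "special" item

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 rounds instead of ~1024 checks

for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the marked amplitude
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

print(f"{iterations} Grover iterations, "
      f"P(marked) = {state[marked] ** 2:.3f}")    # close to 1.0
```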
It’s important to note that quadratic speedup is still polynomial, so it doesn’t make intractable problems easy – an exponential number of possibilities remains exponential even under a square root. For example, an NP-complete problem with $$2^n$$ possibilities would still take ~$$2^{n/2}$$ steps with Grover, which is huge. So Grover’s algorithm does not break NP-complete problems in general. What it does is provide a valuable (and broadly applicable) quadratic speedup that could be very useful when combined with other techniques. Many proposed quantum speedups in areas like cryptography (e.g. searching keyspaces) or even machine learning (for searching solution spaces) boil down to Grover’s algorithm or variants of it.
In summary, Grover’s algorithm shows that for unstructured search tasks, a quantum computer needs only the square root of the number of steps a classical computer would need. It exemplifies how quantum superposition and interference can quadratically accelerate general search problems. This is a more modest but still meaningful improvement, and unlike Shor’s very specific application, Grover’s approach can be applied to a wide range of search and optimization tasks.
Simulating Nature: Quantum Chemistry and Materials
Richard Feynman famously quipped, “Nature isn’t classical, dammit, so if you want to simulate it, you’d better make it quantum mechanical.” Simulation of quantum systems (like molecules, materials, and atomic processes) is widely considered the most natural application for quantum computers. The reason is intuitive: quantum systems are exponentially complex for classical computers to model, but a quantum computer can represent and evolve quantum states directly. In other words, quantum computers can simulate other quantum systems with far less effort because they follow the same physical rules.
Classical supercomputers struggle to simulate the electronic structure of anything but the smallest molecules. The computational cost blows up exponentially as you add more atoms or electrons – very quickly exceeding any feasible amount of memory or time. Quantum computers, by using qubits to represent the quantum states of a system, don’t face that exponential blowup (at least not in the same way). Many-body quantum equations are essentially intractable (on classical machines), taking the age of the universe, or longer, to solve. A quantum computer doesn’t have this problem – by definition the qubits already know how to follow the laws of quantum physics. In principle, that means a quantum computer could simulate chemical reactions and material properties that no classical computer ever could.
Quantum simulation was in fact the original motivation for quantum computing in Feynman’s 1982 paper. Today, it remains one of the most promising near-term uses. For instance, quantum computers have begun simulating simple molecules: finding the ground-state energy of hydrogen and lithium hydride, or modeling small reaction dynamics using a few qubits. These problems are toy-sized, but they validate the approach. As hardware scales, the goal is to tackle useful chemistry problems – designing new pharmaceuticals, catalysts, or materials – which are currently beyond reach. Pharmaceutical research could be revolutionized by simulating drug molecule interactions exactly instead of relying on trial-and-error, potentially “massively speeding up the R&D of life-saving new drugs and treatments”. In chemistry and materials science, quantum simulation could lead to discovering better catalysts for industrial processes (e.g. for carbon capture or fertilizer production) or understanding high-temperature superconductors. These are grand challenges that classical computing can only approximate; quantum computing could eventually solve them outright.
Already, companies and researchers are using small quantum processors to simulate molecular chemistry at a new level of accuracy. For example, variational quantum eigensolver (VQE) algorithms (a hybrid quantum-classical method) have been used to calculate the energy of molecular hydrogen, water, and other compounds on current quantum hardware. Volkswagen, Mercedes-Benz, and others have interest in quantum simulation for battery materials – aiming to improve EV batteries by modeling new cathode chemistries. In fact, automakers like Daimler, Toyota, and Hyundai have partnered with quantum firms to accelerate battery research: Hyundai is working with IonQ to use quantum computations (VQE) to study lithium compounds for next-gen batteries.
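Schematically, a VQE loop looks like the sketch below, where a small parameterized circuit is simulated as a state vector and SciPy plays the role of the classical optimizer. The two-qubit Hamiltonian coefficients here are illustrative placeholders, not real molecular data; in practice they would come from a chemistry package, and the expectation values would come from quantum hardware rather than a simulator.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])

def kron(a, b):
    return np.kron(a, b)

# Toy 2-qubit Hamiltonian as a sum of Pauli terms.
# Coefficients are illustrative placeholders, NOT real molecular data.
H = (-1.0 * kron(I, I) + 0.4 * kron(Z, I) - 0.4 * kron(I, Z)
     + 0.2 * kron(Z, Z) + 0.1 * kron(X, X))

def ansatz(theta):
    """Hardware-efficient-style trial state: two RY rotations plus a CNOT."""
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])
    psi = np.zeros(4)
    psi[0] = 1.0                                       # start in |00>
    psi = kron(ry(theta[0]), ry(theta[1])) @ psi
    return cnot @ psi

def energy(theta):
    psi = ansatz(theta)
    return float(np.real(np.conj(psi) @ H @ psi))      # <psi|H|psi>

result = minimize(energy, x0=[1.0, 1.0], method="COBYLA")
exact = np.min(np.linalg.eigvalsh(H))
print(f"VQE energy: {result.fun:.4f}   exact ground state: {exact:.4f}")
```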
Why is simulation such a quantum-suitable problem? Because it directly leverages what quantum computers do best: mimicking quantum systems. A quantum computer can naturally represent the entangled, multi-particle state of a molecule, something that would require an exponentially large classical data structure. In a sense, only a quantum computer can efficiently predict the behavior of a complex quantum system. This is likely where we will see the first real practical quantum advantage – solving a chemistry or materials problem that is truly beyond classical reach. Recently, a peer-reviewed study showed a D-Wave quantum annealer outperforming classical supercomputers for simulating a particular quantum magnetic material (a spin-glass system). That quantum machine did in minutes what would have taken classical computing millions of years for that specialized simulation. Such results are early indicators that quantum simulation of useful systems will eventually surpass classical methods, fulfilling Feynman’s vision.
Optimization and Annealing: Tackling Combinatorial Challenges
Many real-world problems – from scheduling flights, to optimizing supply chains, to configuring a factory floor – boil down to combinatorial optimization. These problems involve searching for the best combination out of an enormous number of possibilities. Classical algorithms often struggle with such problems when they grow large; they can be NP-hard and generally require clever heuristics or approximations. Quantum computing offers new approaches here, notably through quantum annealing and gate-based algorithms like QAOA (Quantum Approximate Optimization Algorithm).
Quantum annealing is a quantum computing paradigm tailored specifically for solving optimization problems by exploiting nature’s tendency to seek low-energy states. D-Wave Systems pioneered this approach with dedicated annealing processors. The idea is to map the optimization problem onto a landscape of “energy” (imagine a hilly terrain where the lowest valley corresponds to the optimal solution). A D-Wave quantum annealer then uses quantum physics – including tunneling and superposition – to let the system evolve into a low-energy state, ideally the global minimum (best solution). In essence, the machine is physically trying many combinations in parallel and naturally settling on a good solution. An analogy: think of trying to find the deepest point in a mountainous region – a classical computer might wander hill by hill, while a quantum annealer behaves like water flowing down to fill the lowest valleys all at once.
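To see what “mapping a problem onto an energy landscape” means, consider a toy Max-Cut instance written as an Ising energy function, the form annealers natively accept. The brute-force search below (a sketch with made-up edge weights) simply exhibits the lowest-energy spin configuration that the annealing hardware is designed to relax into physically.

```python
from itertools import product

# A tiny Max-Cut instance: nodes 0..3, weighted edges (made-up weights).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 2.0)]

def ising_energy(spins):
    """Max-Cut as an Ising energy: E = sum over edges of w_ij * s_i * s_j.
    Cutting edge (i, j) means s_i != s_j, which lowers the energy."""
    return sum(w * spins[i] * spins[j] for i, j, w in edges)

# The annealer physically relaxes toward the lowest-energy spin state;
# here we brute-force all 2^4 configurations to show the target it seeks.
best = min(product([-1, +1], repeat=4), key=ising_energy)
print("lowest-energy spins:", best, " energy:", ising_energy(best))
```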
Example: Volkswagen famously tested a traffic-routing optimization on a D-Wave annealer. In a 2019 pilot in Lisbon, VW equipped city buses with a quantum-powered traffic management system. The D-Wave quantum computer calculated the fastest routes for each of 9 buses almost in real time, adjusting to minimize congestion. This was one of the first real-world demonstrations of a quantum optimizer tackling a complex urban problem. While small in scale, it showed that even today’s nascent quantum annealers can handle dynamic optimization (in this case, a form of the traveling salesman problem for bus routes).
Quantum annealing has also been applied (in experimental or research settings) to problems like: portfolio optimization in finance, scheduling of manufacturing processes, optimizing wireless network configurations, and more. D-Wave’s latest Advantage systems offer over 5,000 qubits for larger optimization problems, and the company has reported instances of finding higher-quality solutions than classical methods for certain hard cases. D-Wave even claimed quantum computational advantage in a material simulation problem (as noted earlier) using their new Advantage2 annealer prototype – a milestone suggesting these machines are reaching regimes where classical heuristics falter.
On the gate-model side, the leading approach is the Quantum Approximate Optimization Algorithm (QAOA). QAOA is a hybrid algorithm that runs on circuit-based quantum computers. It works by applying a series of parameterized quantum operations (mixing and phase-separating operations) to an initial state, and then using a classical computer to tweak those parameters iteratively to improve the result. In effect, the quantum circuit generates a candidate solution (a bitstring) and the classical optimizer adjusts the circuit to make that solution better step by step. QAOA is designed to eventually output a bitstring that encodes a very low-cost solution to the optimization problem at hand. It’s a promising algorithm for things like job scheduling, Max-Cut graph partitioning, or any NP-hard optimization cast as minimizing some cost function.
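The sketch below simulates a single-layer (p = 1) QAOA on a toy four-node Max-Cut problem: the quantum circuit is emulated as a state vector, and a classical optimizer tunes the two circuit parameters (gamma, beta) to lower the expected number of uncut edges. It is illustrative only; real runs would replace the simulation with hardware executions.

```python
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # toy Max-Cut ring on 4 qubits
n, N = 4, 16

# Diagonal cost for each bitstring: number of *uncut* edges (to be minimized).
cost = np.array([sum(1 for i, j in edges
                     if (z >> i) & 1 == (z >> j) & 1)
                 for z in range(N)], dtype=float)

X = np.array([[0, 1], [1, 0]])

def mixer(beta):
    """exp(-i*beta*X) applied to every qubit."""
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    U = np.array([[1.0]], dtype=complex)
    for _ in range(n):
        U = np.kron(U, rx)
    return U

def qaoa_expectation(params):
    gamma, beta = params
    psi = np.full(N, 1 / np.sqrt(N), dtype=complex)   # uniform superposition
    psi = np.exp(-1j * gamma * cost) * psi            # phase-separating layer
    psi = mixer(beta) @ psi                           # mixing layer
    return float(np.real(np.sum(np.abs(psi) ** 2 * cost)))

res = minimize(qaoa_expectation, x0=[0.3, 0.3], method="Nelder-Mead")
print("expected uncut edges after 1 QAOA layer:", round(res.fun, 3),
      " (best possible: 0)")
```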
The jury is still out on how much advantage QAOA can offer as devices scale. In some early cases, a shallow QAOA had better approximation ratios than known classical algorithms, but then improved classical algorithms closed the gap. The relative performance of QAOA vs. classical heuristics remains an active research question. Still, QAOA is valuable in that it can run on today’s small quantum processors, and it embodies the hybrid strategy likely needed for near-term quantum success (more on hybrid later). Researchers have already run QAOA on up to ~40-qubit problems on actual hardware, and as qubit counts and quality improve, we may see QAOA deliver faster or better solutions for certain tough optimizations.
In summary, optimization is a key area where quantum computers aim to do better than classical ones. Quantum annealers provide a specialized, analog way to attack these problems by literally finding low-energy states of a physical system that represents the math problem. Gate-model algorithms like QAOA attack the same problems with a digital quantum approach, using interference to concentrate probability on good solutions. We have early evidence of quantum machines solving certain optimization instances either faster or with better quality results than classical solvers, but generally for very specific cases. As hardware improves, the hope is that quantum optimizers will consistently beat classical methods for large-scale, complex optimization tasks that businesses care about – from supply chain design to AI hyperparameter tuning.
The Quest in Machine Learning
Machine learning (ML) and artificial intelligence drive much of today’s computing demand – and researchers are eager to see if quantum computers can provide an edge in this domain. The idea of Quantum Machine Learning (QML) is to use quantum algorithms to accelerate or improve tasks like classification, clustering, regression, or generative modeling. Quantum computers have a natural affinity for linear algebra (they manipulate state vectors in Hilbert space), so many QML approaches focus on speeding up linear algebra subroutines that appear in ML algorithms. For example, there are quantum algorithms for solving linear systems of equations (HHL algorithm) or for performing principal component analysis on quantum states, which in theory offer exponential or significant speedups for those specific mathematical problems.
One promising area is quantum-enhanced feature spaces for things like support vector machines or kernel methods. A quantum computer can encode data into a quantum state (a process called amplitude encoding) and potentially compute certain kernel functions or distances much faster by exploiting high-dimensional quantum state spaces. Another is quantum annealing for ML: using D-Wave annealers to do things like Boltzmann sampling or discrete optimization for ML models (for example, feature selection or training certain combinatorial models).
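As a rough illustration of the quantum-kernel idea, the sketch below uses a simple angle-encoding feature map (an illustrative choice, simulated classically here) and defines the kernel entry as the squared overlap between two encoded states; a quantum processor would estimate the same quantity by running the two encoding circuits and measuring.

```python
import numpy as np

def feature_map(x):
    """Illustrative angle-encoding feature map: each feature x_i sets an RY
    rotation on its own qubit, so a d-dimensional point becomes a 2^d-dim state."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])   # RY(x_i)|0>
        state = np.kron(state, qubit)
    return state

def quantum_kernel(x1, x2):
    """Kernel entry k(x1, x2) = |<phi(x1)|phi(x2)>|^2 -- the quantity a quantum
    computer would estimate by running the two encoding circuits back to back."""
    return abs(np.dot(feature_map(x1), feature_map(x2))) ** 2

a, b = np.array([0.1, 1.2, 2.0]), np.array([0.2, 1.0, 1.9])
print("k(a, b) =", round(quantum_kernel(a, b), 4))
# The resulting Gram matrix can then be handed to a classical SVM, e.g.
# scikit-learn's SVC(kernel="precomputed"); the quantum device only supplies
# the kernel values.
```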
However, practical quantum advantage in ML is still mostly theoretical at this stage. There is evidence that some quantum algorithms might be able to look at datasets in a new way, potentially speeding up parts of machine learning computations. For example, a quantum algorithm can sample from a probability distribution or estimate the mean of a function faster than classical (this relates to Grover’s speedup and amplitude amplification, which can quadratically speed up Monte Carlo simulations, a technique used in ML). Indeed, quantum amplitude estimation can accelerate Monte Carlo sampling, which underlies many machine learning and finance algorithms. In a joint experiment, IBM and JPMorgan researchers showed that a quantum approach to option pricing (essentially an ML-esque simulation problem) could reach the same accuracy with thousands of samples instead of millions for a classical Monte Carlo – a quadratic speedup in action.
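The back-of-envelope arithmetic behind that result: classical Monte Carlo error shrinks like 1/√M with M samples, while amplitude estimation’s error shrinks roughly like 1/M with M quantum queries, so reaching the same target accuracy takes quadratically fewer quantum queries.

```python
# Samples/queries needed to reach a target error eps (order-of-magnitude sketch).
eps = 1e-3
classical_samples = int(1 / eps ** 2)   # Monte Carlo error ~ 1/sqrt(M) -> 1,000,000
quantum_queries = int(1 / eps)          # amplitude estimation error ~ 1/M ->   1,000
print(classical_samples, quantum_queries)
```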
That said, a major hurdle for QML is data loading and output. The phrase “garbage in, garbage out” applies strongly: getting classical data into a quantum state can cost so much time that it offsets the quantum speedup. As a RAND report noted, “efficient preparation of the data required to run a quantum computer” has been a serious roadblock. If you have to spend exponential time encoding your dataset into qubits, any subsequent quantum speedup may be moot. Similarly, extracting the result (for instance, getting all the bits of a solution vector) from the quantum computer can require many repeated runs, slowing things down. Researchers are actively exploring ways to mitigate this, such as quantum-inspired data compression or hybrid schemes where the quantum computer only handles the hard part of the ML task and the rest stays classical.
In terms of current achievements, quantum machine learning experiments so far include: small-scale classification (e.g. using a quantum support vector machine on a toy dataset), training tiny quantum neural networks (often called parameterized quantum circuits or variational quantum classifiers) to recognize patterns, and quantum generative models that learn the probability distribution of simple data. These have mostly been proofs of concept on a few qubits, showing that the pipeline (encode data → run quantum model → get result) can work. For instance, amplitude encoding plus a variational quantum circuit has been used to classify hand-written digit images in a very limited form, or to detect correlations in financial data. No fundamental speedup has been realized yet in these tasks because the problem sizes are small – classical computers can handle them easily. The hope is that as quantum computers grow, they might start outperforming classical ML on tasks like quantum chemistry data (where the data itself might come from a quantum process) or certain high-dimensional patterns where classical algorithms bog down.
It’s worth tempering expectations: many experts believe optimization and simulation will yield useful quantum advantages before general machine learning does. The reasoning is that most ML tasks (like image recognition or language modeling) are not obviously amenable to exponential quantum speedups – they involve a lot of structure and regular data that classical algorithms already exploit well. Moreover, today’s ML benefits hugely from scale of data (“big data”), whereas quantum computers actually find their sweet spot in “small data, big compute” situations (because loading huge data into qubits is problematic). So the intersection of quantum and ML might be in scenarios where data is limited but the computation needed is astronomical – for example, analyzing quantum physics data, or improving optimization within ML (like hyperparameter tuning via quantum methods).
In summary, quantum machine learning is an exciting but longer-term avenue. There are theoretical reasons to believe certain learning tasks could see at least polynomial speedups, and maybe new quantum-inspired models of learning that have better accuracy or need less data. Some initial experiments and algorithms (quantum SVMs, quantum recommendation systems, etc.) have shown promise in theory. But for now, many challenges – especially data I/O and error rates – mean that classical machine learning will continue to dominate. The likely scenario is that quantum computers will accelerate specific subroutines in ML (like faster kernel evaluations, sampling, or optimization) rather than entirely replace classical ML workflows. As we’ll discuss, this fits into a broader view that the future is hybrid.
Not Every Problem Is Quantum-Friendly
For all the hype, it’s crucial to note that quantum computers won’t automatically solve every problem faster than classical computers. In computational complexity terms, the problems that quantum computers can solve efficiently belong to a class called BQP (bounded-error quantum polynomial time). While BQP includes some famous problems that are classically hard (like factoring), experts believe it does not include the truly gnarly NP-complete problems that haunt computer science. In other words, the Traveling Salesman Problem, boolean satisfiability, and other NP-complete challenges are thought to remain intractable even for quantum computers. There is no known “quantum super-algorithm” that cracks these problems in polynomial time – and most computer scientists suspect no such algorithm exists. Grover’s algorithm, for instance, offers at best a square-root speedup for NP-complete problems, which still means exponential time overall in the worst case. If a problem would take $$2^n$$ steps classically, Grover might reduce it to about $$2^{n/2}$$ steps – better, but still exponential growth as n increases. That’s a far cry from “breaking” the exponential barrier.
The idea that a quantum computer can “try all possibilities at once” and instantly pick the right answer is a common misconception. Yes, a quantum machine can hold a superposition of an astronomical number of possible states, but you only get to measure one outcome – you don’t get all the answers for free. The power of a quantum algorithm lies in guiding the interference of those possibilities toward the correct answer. If a problem doesn’t have some structure that a quantum algorithm can exploit to cancel wrong answers and amplify the right one, a quantum computer provides no speed advantage at all. For many of the hardest known problems, we simply don’t know of any structure to exploit – and decades of research suggest we might never find one.
So, while quantum computers will no doubt expand the frontier of what’s computationally feasible, they won’t make brute-force solution of arbitrary problems trivial. NP-hard is still hard. For the enterprise audience, this means one shouldn’t expect a quantum computer to suddenly solve, say, an arbitrary traveling salesman optimization with thousands of cities to absolute optimality in milliseconds – there’s no evidence that’s possible. What quantum computers can do is offer new heuristic approaches and sometimes provably faster algorithms for certain special cases or structured versions of these tough problems. But classical algorithms and heuristics will continue to improve too. In many cases, quantum will complement classical rather than completely supplant it.
Knowing which problems are quantum-friendly – meaning they have the algebraic or probabilistic structure that a quantum algorithm can leverage – will be key. Tasks involving lots of linear algebra, matrix operations, or finding patterns in mathematical structures tend to be more amenable to quantum methods than, say, simple arithmetic or highly sequential logic. As quantum researchers often remind us, quantum computers won’t replace classical computers for everything; they’ll excel at some things and be no better than a classical computer for others. Let’s examine what today’s nascent quantum machines can already do, and what remains firmly in classical territory.
Quantum Computing in Practice Today
At this moment, in 2025, quantum computers are still laboratory curiosities and prototypes. They are not general-purpose replacements for classical computers – in fact, by most metrics (speed, error rates, capacity) they lag far behind. Most researchers who have worked with actual quantum hardware will tell you these machines are temperamental and extremely limited. Winfried Hensinger, a pioneer in ion-trap quantum computing, bluntly said of his five quantum machines, “They’re all terrible. They can’t do anything useful.” That’s harsh but not far from the truth. The Google supremacy experiment, for example, was essentially performing a very narrow task (random circuit sampling) that was engineered to be hard for a classical computer yet feasible for their quantum chip. It’s not an application with direct practical value (aside from bragging rights and scientific insight). Likewise, a Chinese team used photonic quantum computers to outdo classical computers at a contrived task called boson sampling – again a remarkable scientific feat, but not something with immediate real-world impact.
Still, we have seen meaningful progress. IBM, one of the leaders in this field, has put quantum computers in the cloud (the IBM Quantum Experience) since 2016, allowing over a million users – from researchers to curious enthusiasts – to experiment with small-scale quantum programs. These devices started with 5 qubits and have grown steadily. In 2022, IBM unveiled “Osprey,” a quantum processor with 433 qubits, the largest of its kind at the time. This was more than triple the qubits on their previous chip, and IBM announced a roadmap to break the 1,000-qubit barrier with a chip called “Condor”, which they achieved by the end of 2023. More qubits are generally better, but only if their quality is maintained – adding qubits while keeping error rates in check is the real challenge. Google, for its part, has been focusing on improving qubit fidelity and error correction. In 2024, Google’s Quantum AI lab debuted a 105-qubit chip nicknamed “Willow” that made headlines for a different reason: it demonstrated that quantum error correction actually works in practice, albeit on a small scale. Using Willow, they showed that combining multiple physical qubits into one “logical” qubit could reduce the overall error rate, a key step toward building reliable large-scale systems. It was a significant research milestone: “the first quantum processor where error-corrected qubits get exponentially better as they get bigger,” according to Google’s team. Even so, we remain far from the kind of fault-tolerant quantum computer needed to run Shor’s algorithm on meaningful sizes or to handle big industrial problems.
What can today’s quantum processors do usefully? Thus far, their accomplishments are mostly in line with early expectations: quantum simulation and proof-of-concept optimization. In August 2020, Google used the Sycamore processor to perform the largest quantum chemistry simulation to date – they simulated a modest chemical mechanism (a 12-qubit Hartree-Fock approximation for a molecular system) by pairing Sycamore with a classical computing cluster. The quantum chip prepared a superposition approximating the molecule’s electron configuration, and a classical computer iteratively refined the parameters. This kind of hybrid quantum-classical simulation foreshadows how we might solve chemistry problems faster than classical computers alone, even if the quantum part is still small. Similarly, IBM and academic collaborators have used quantum processors to calculate the energy surfaces of simple molecules like BeH₂ and LiH, validating that quantum machines can handle chemistry computations and even rival specialized classical simulations for those small cases. None of these simulations are beyond classical reach yet – but with each incremental hardware improvement, the size of the molecule or material we can tackle grows. The hope is that in a few years, quantum computers will start providing insight into chemical systems that classical methods cannot handle due to exponential complexity. That would mark a true quantum advantage in practical terms, likely in materials design or drug discovery.
In optimization and machine learning, quantum computing is still entirely experimental. One notable project involved traffic flow optimization: as mentioned, Volkswagen tested a quantum algorithm on a D-Wave machine to optimize taxi routes in Beijing, and they reported significantly improved travel times in the simulation. Financial institutions have trialed quantum algorithms for portfolio optimization and risk analysis, albeit on toy problems. Another arena is quantum machine learning – researchers are exploring whether quantum computers could accelerate certain linear algebra tasks at the heart of machine learning. There is theoretical evidence of speedups for things like solving linear systems or computing large matrix products (HHL algorithm, quantum matrix multiplication), which could translate to faster training of ML models. However, current quantum hardware is far too noisy and limited to outperform classical GPUs on any real machine learning task. At best, small quantum processors have been used to classify tiny datasets or to calculate simple statistical properties as a demonstration. The consensus is that quantum machine learning will require more qubits and better error rates to show a clear advantage.
The most unequivocal achievement so far remains that quantum supremacy sampling task – essentially solving a problem no classical computer could solve in any reasonable time. While that problem itself was esoteric, it proved end-to-end that a quantum device with ~50 high-quality qubits can outperform a classical supercomputer. It’s a bit like the Wright brothers’ first flight: the flight itself didn’t carry any passengers or cargo, but it showed that heavier-than-air flight was possible. Similarly, we now have no doubt that quantum computational speedups are physically real. The task ahead is to extend those speedups to problems that matter. None of the early demonstrations have been truly useful yet, and in some cases classical cleverness later matched the quantum performance. But the threshold has been crossed in principle, and each year quantum hardware improves, closing the gap between demonstration and application.
Why Classical Computers Still Reign (For Now)
Despite the flurry of breakthroughs, it bears emphasizing that in 2025 the classical computer is still king in practice. Your laptops, servers, and supercomputers are not going obsolete anytime soon. Quantum computers remain slow, delicate, and highly specialized devices. For most tasks that businesses care about – from running a database to rendering video to powering AI inferences – today’s classical computers not only outperform quantum computers, they absolutely trounce them. Even for problems that quantum algorithms are theoretically good at (like factoring), current quantum hardware doesn’t have nearly enough qubits or low enough error rates to compete with classical algorithms. As of now, a state-of-the-art classical algorithm on a powerful classical computer can factor numbers far larger than what the best quantum computer can handle. The largest numbers factored on quantum devices with Shor’s algorithm so far are trivial (numbers like 15 = 3×5 and 21 = 3×7, in early proof-of-concept demos) – whereas classical computers have factored numbers with hundreds of digits. It will likely be many more years before a quantum computer actually factors a number beyond what classical computing can do, despite the theoretical advantage.
The reasons are both hardware limitations and fundamental constraints. Today’s quantum processors are extremely fragile. Qubits must be isolated from any stray noise – vibrations, electromagnetic interference, heat – because interaction with the environment causes them to lose their quantum state, a phenomenon called decoherence. Maintaining a qubit’s superposition and entanglement for more than a brief moment is a bit like trying to balance a pencil on its tip in a windstorm. In superconducting quantum computers (like IBM’s and Google’s), qubits only stay coherent for microseconds. Trapped ion qubits (like those used by IonQ and others) have longer coherence times, but operations on them are slower. In all cases, errors creep in quickly. The error rates of quantum gates (the basic operations) are typically on the order of 0.1% or 1% per operation, which sounds small but compounds quickly when you perform thousands of operations. By contrast, classical computers have error rates so low they’re effectively negligible – perhaps one error in $$10^{18}$$ operations, thanks to stable hardware and error-correcting codes. The best quantum computers today might get through a few thousand operations before an error occurs. At the speeds these chips run, that means errors crop up within a tiny fraction of a second of computation – enormously often by classical standards. An errant bit-flip in a classical computer is a rare fluke; an errant qubit flip in a quantum computer is an expected event every time you run a complex circuit.
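The compounding effect is easy to quantify: if each gate fails independently with probability p, a depth-d circuit finishes without error with probability (1-p)^d. A quick calculation with a 0.1% per-gate error rate shows why a few thousand operations is roughly the practical ceiling today.

```python
p = 0.001                                   # 0.1% error per gate
for depth in (100, 1_000, 10_000):
    success = (1 - p) ** depth              # chance the whole circuit runs clean
    print(f"{depth:>6} gates -> P(no error) ≈ {success:.3f}")
# 100 gates ≈ 0.905, 1,000 gates ≈ 0.368, 10,000 gates ≈ 0.000
```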
To actually leverage algorithms like Shor’s factoring, we need fault-tolerant quantum computing, meaning we must use quantum error correction to counteract those physical errors. Error correction itself is incredibly resource-intensive – it requires a large overhead of physical qubits to create a single logical qubit that is stable. Estimates vary, but breaking something like RSA-2048 via Shor’s algorithm might require on the order of thousands of logical qubits, which in turn could require millions of physical qubits with today’s error rates. We currently have devices with tens or hundreds of qubits. Clearly, there’s a long journey ahead. As one Nobel laureate, Frank Wilczek, recently remarked, “Quantum computing remains a research endeavor, and classical computers will continue to outperform them for the foreseeable future.” In other words, we are not on the cusp of quantum computers overtaking classical computers in general performance – not this year, and likely not this decade.
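To get a feel for where the “millions of physical qubits” figure comes from, here is a rough estimate using a commonly quoted surface-code scaling heuristic (logical error rate ≈ 0.1·(p/p_th)^((d+1)/2), with roughly 2d² physical qubits per logical qubit of code distance d). The constants and the logical-qubit count below are illustrative ballparks, not a rigorous resource estimate.

```python
p, p_th = 1e-3, 1e-2           # physical error rate vs. surface-code threshold (illustrative)
target_logical_error = 1e-12   # roughly what a long Shor run would need per logical qubit

# Smallest (odd) code distance d such that 0.1 * (p/p_th)**((d+1)/2) <= target.
d = 3
while 0.1 * (p / p_th) ** ((d + 1) / 2) > target_logical_error:
    d += 2

physical_per_logical = 2 * d ** 2           # data + ancilla qubits, roughly
logical_qubits = 4000                       # ballpark scale for factoring RSA-2048
total = logical_qubits * physical_per_logical
print(f"distance {d}, ~{physical_per_logical} physical per logical, "
      f"~{total / 1e6:.1f} million physical qubits in total")
```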
Moreover, classical computers are not standing still. Improved algorithms and heuristics often step up to meet challenges posed as quantum targets. For instance, when Google demonstrated quantum supremacy, it chose a task that was thought to be infeasible for classical simulation. After the fact, classical researchers optimized their methods and managed to simulate larger portions of the task than expected (though still not the full thing at the quantum scale). This dynamic will continue: as quantum computing pushes into new territory, classical computing fights back with better algorithms or hardware to narrow the gap. We saw this when early quantum chemistry experiments were matched by clever classical simulation techniques. The takeaway for decision-makers is to be realistic: classical computers are extraordinarily powerful for most problems we care about, and they will continue to be indispensable. Quantum computers, even as they improve, will for a long time function as accelerators for specific tasks rather than wholesale replacements. As one expert put it, quantum and classical systems will coexist, each specializing in those tasks they can do better. For anything that is easily handled by classical algorithms, a quantum computer is overkill – and an expensive, finicky one at that. Quantum hardware requires cryogenic refrigeration, isolated lab conditions, and constant calibration. At least in the near term, we should reserve quantum computers for problems so demanding that even supercomputers struggle – everything else is faster and cheaper on classical silicon.
The Best of Both Worlds: Hybrid Quantum-Classical Computing
Rather than a rivalry, many experts see the future of computing as a hybrid partnership between quantum and classical processors. In this paradigm, quantum computers would function as specialized co-processors, similar to how we use GPUs for graphics or AI accelerators for neural networks today. A classical computer would run the overall program, delegating certain hard subroutines to a quantum device which, for example, might solve an optimization sub-problem or evaluate a quantum mechanical simulation, and then pass the result back. We’re already practicing this approach because it’s the only viable mode with current hardware: hybrid algorithms use a classical computer to orchestrate and interpret the output of a quantum computer. A prime example is the Variational Quantum Eigensolver (VQE), used for chemistry simulations. In VQE, the quantum computer prepares a trial quantum state (e.g. a guessed electron configuration for a molecule) and measures its energy, while a classical computer adjusts the parameters and guides the search for the minimum energy. This iterative loop continues, quantum and classical each doing what they do best. Similar hybrid strategies, like the Quantum Approximate Optimization Algorithm (QAOA), pair quantum circuits with classical optimizers to tackle combinatorial problems. In QAOA, a quantum circuit is used to probe the solution space and a classical algorithm updates the parameters – together inching toward a high-quality solution. These hybrid algorithms have shown promise on small problems, and importantly they can often tolerate the noise of today’s quantum hardware by offloading some work to the classical side.
All major quantum computing players are betting on the cloud as the delivery mechanism for this hybrid model. Instead of selling quantum computers that you install on-premises (which would be impractical given the extreme cooling and maintenance requirements), companies like IBM, Google, Amazon, and Microsoft offer access to quantum processors via cloud services. In practice, a developer writes code in a high-level language (like Q#, Qiskit, or Cirq) on a classical computer, and API calls send the heavy-lifting parts to a quantum machine in a remote data center. The result comes back to the classical front-end, which continues the computation. This way, end users don’t have to worry about the messy quantum physics behind the scenes; they simply treat the quantum computer as a specialized API. We can envision, in a few years, an enterprise workflow where a cloud-based quantum solver is invoked to tackle, say, an optimization step in a larger software pipeline – much as today an AI service might be called to do image recognition inside an application. Eventually, advanced compilers and scheduling systems will determine on the fly which parts of a computation should run on a CPU, which on a GPU, and which on a quantum processing unit (QPU). The end-user might not even know or care that quantum resources were used – only that the result came faster.
Another aspect of the hybrid future is modular quantum computing – essentially building quantum computers in a distributed way and linking them with both quantum and classical channels. IBM’s upcoming Quantum System Two architecture, for example, is designed to connect multiple smaller quantum processors into one larger virtual processor. This modular approach is analogous to how classical supercomputers are built from many nodes: each module might be a 1,000-qubit quantum computer, and by entangling modules together via quantum interconnects (and coordinating them classically), one could scale to millions of qubits. Classical computers would play a crucial role in error correction across modules and in orchestrating the communication. Companies are also exploring photonic interconnects to create a “quantum internet” that could network quantum devices. In all these visions, quantum and classical technologies are deeply intertwined. Rather than thinking of quantum computers as separate, black-box supermachines, it’s more accurate to think of them as new components to be integrated into the existing computing landscape.
What does this mean for CIOs and tech strategists? It means that adopting quantum computing will likely not be a matter of ripping out classical infrastructure and replacing it, but rather augmenting your compute resources with quantum acceleration where it provides an edge. It also means that skills in both classical and quantum computing will be valuable – the most effective algorithms in the near future will be those that split tasks between classical and quantum cleverly. We’re already seeing early commercial experiments with this hybrid model: car manufacturers using quantum annealers alongside classical solvers for traffic optimization, logistics companies mixing quantum route optimization with classical heuristics, and biotech firms using quantum chemistry calculations within a larger classical simulation pipeline for drug discovery. In each case, the quantum piece addresses a slice of the problem and the classical system handles everything else. For the foreseeable future, this collaborative approach is the pragmatic one. Quantum computers will not replace classical computers any more than an MRI machine replaces a stethoscope – they’re different tools for different jobs, often best used together.
Challenges on the Road to Large-Scale Quantum Computing
Before quantum computers can realize their full potential, several daunting engineering challenges must be overcome. We’ve touched on the biggest one: error correction and decoherence. The current generation of devices is called NISQ (Noisy Intermediate-Scale Quantum) devices – “noisy” because every operation is prone to error, and “intermediate-scale” because they have on the order of 50–1000 qubits, nowhere near enough for error-corrected, fault-tolerant operation. To scale further, researchers are pursuing multiple approaches. One is improving the physical qubit quality – increasing coherence times and reducing gate error rates through better materials, fabrication, and control techniques. Another is developing smarter error-correction codes that require fewer overhead qubits. There have been encouraging steps: for example, as mentioned, Google demonstrated that adding more qubits to their error-correcting scheme resulted in longer-lived logical qubits (a sign that the trade-off can eventually pay off). In 2023, researchers achieved the first experimental quantum error-correction “break even” point, where an error-corrected logical qubit outperformed the best of its constituent physical qubits, effectively prolonging the quantum information’s lifetime beyond the physical limits. These are early milestones on a journey that likely requires inventing new technologies – perhaps new qubit types or error-correction methods – to reach the scales needed for general-purpose quantum computing. It’s worth noting that multiple qubit technologies are being explored: superconducting circuits (used by IBM, Google, Rigetti), trapped ions (IonQ, Quantinuum), neutral atoms (Pasqal, QuEra), photonic qubits (Xanadu), and others. Each has pros and cons: ions have long coherence but slow gates; superconductors are fast but decohere quickly; photonic qubits barely decohere in flight, but two-qubit gates are hard to implement with them; and so on. The race is still open as to which modality (or which combination) will ultimately scale best. It could even be a hybrid of its own – for instance, using superconducting qubits for processing and photonic links for communication.
There are also significant challenges in the software and algorithm layer. Writing efficient quantum algorithms is notoriously hard; the space of quantum operations is vast and unintuitive. For classical computers, decades of compiler optimizations and software engineering practices help developers harness hardware performance. Quantum computing will need analogous advances – compilers that can optimize quantum circuits, error mitigation techniques in software, and high-level libraries to make quantum programming accessible. Today, programming a quantum computer often feels like hand-crafting instructions for an early mainframe – it’s low-level and requires quantum physics knowledge. In the future, we’d want it to be as easy as calling a library function. Companies are working on these tools, and there’s an active open-source community developing frameworks (like Qiskit, Cirq, Pennylane, etc.). But this is all in its infancy. Better algorithms are also an area of active research: for example, quantum algorithms for machine learning, for differential equations, for graph problems – some show promise, but none is a slam dunk yet. It could be that the “killer app” of quantum computing hasn’t been discovered; Shor’s and Grover’s algorithms were both found in the 1990s, and it’s possible that as hardware grows, we’ll uncover new algorithms that make even better use of it.
Another practical hurdle: integration and throughput. A quantum computer today can perform maybe a few hundred operations per second at most (because of the need to reset and calibrate between runs). Classical computers execute billions of operations per second. Bridging that gap for real workloads will require improving the raw quantum gate speeds and the I/O (how fast we can load problems in and read results out). Some architectures, like photonic quantum computers, could have inherently higher throughput, but they currently lag in other aspects. The bottom line is, building a large-scale, fully programmable, error-corrected quantum computer is one of the most difficult engineering challenges ever attempted – often likened to the Moon landing of computation. It might take another decade or more of R&D and possibly some breakthroughs we can’t foresee. As a 2022 review noted, “Quantum computers are exceedingly difficult to engineer, build and program… errors in the form of noise, faults and loss of coherence are crippling to their operation”. Keeping thousands or millions of qubits coherent and entangled together is a task beyond any system we have today, and even understanding how to get there is an area of ongoing research.
And yet, none of these challenges appears insurmountable. They will require sustained investment, clever ideas, and probably a few “Eureka” moments in both physics and computer science. Governments and industry are pouring resources into quantum computing precisely because the stakes are high – the first groups to crack these problems will usher in a new era of computing power. It’s reminiscent of the early days of aviation or spaceflight, where progress was slow and iterative until suddenly it wasn’t.
The Quantum Horizon
Standing here in 2025, we can liken the state of quantum computing to where classical computing was in, say, the 1940s or 1950s. We have prototype machines that fill entire rooms (or fridges), prone to glitches and error, yet each new model leaps ahead in capability. We’re still learning what we can do with these machines – much as in the 1940s people were figuring out what digital computers could be used for beyond number-crunching artillery tables. There’s a Wild West feel: new algorithms, new hardware approaches, and even new theoretical insights into quantum information are appearing every year. It’s an exciting and somewhat unpredictable trajectory.
If current trends hold, the next 5-10 years will see quantum computers continue to grow and then use modular architectures to scale to potentially a million qubits by the late 2020s. Most large vendors predict a useful error-corrected quantum computer within less than a decade, focusing on quality over quantity of qubits – their goal is to lower error rates enough that logical qubits can be realized without an astronomical overhead. We might reasonably expect that by the end of this decade, there will be cloud-accessible quantum processors with enough oomph to perform certain specialized tasks faster than classical supercomputers – not just contrived tasks, but useful ones like simulating a chemical reaction of industrial significance or cracking a cryptographic challenge that is beyond classical reach (hopefully only as a test, since by then we’ll have quantum-proof encryption deployed!).
In the meantime, a parallel effort is underway to prepare for the day quantum computers become powerful: the field of post-quantum cryptography is developing new encryption algorithms that even a quantum computer can’t easily break, ensuring our secrets remain safe. This is a pragmatic acknowledgment that quantum advantage in cryptography is coming – maybe not tomorrow, but likely within the next 5 to 10 years – and we need to be ready. Governments are already standardizing post-quantum encryption schemes for future-proof security.
We should also consider the broader implications. If and when quantum computers can do things like design drugs or materials by simulation, it could accelerate innovation in pharmaceuticals, energy, and chemicals dramatically – potentially compressing years of R&D into weeks or days of computation. Optimization improvements could save industries billions by finding more efficient supply routes, manufacturing processes, or financial strategies. Quantum machine learning could open up new levels of AI capability by handling data or computations that are currently unmanageable. These are speculative outcomes, but each year brings us small steps closer. History has shown that when a new computational capability emerges, creative minds soon find applications that even the inventors didn’t anticipate.
Yet it’s important to temper techno-utopian expectations. Some problems will remain hard. Quantum computers won’t suddenly make AI conscious or solve world hunger – those require more than just computing power. And there is always the possibility that we hit unforeseen roadblocks: perhaps scaling beyond a certain qubit count proves more challenging than expected, or errors stubbornly persist. Interestingly, if someday we discovered that a large-scale quantum computer cannot be built due to some undiscovered law of physics, that in itself would be a groundbreaking revelation, overturning our understanding of quantum theory. But so far, every experiment has reinforced that quantum mechanics works exactly as advertised, and no fundamental barriers have appeared. It’s engineering and ingenuity that will determine how far we go.
In summary, quantum computers already outperform classical computers on a few specialized tasks, and over the coming years that list of tasks will grow. They excel at problems where superposition and entanglement let them explore a vast landscape of possibilities in parallel and use interference to extract an answer – factoring numbers, searching databases, simulating quantum systems, solving certain optimization problems, and more we have yet to discover. Problems that are highly structured, mathematical, or rooted in quantum physics themselves are especially “quantum-friendly.” Classical computers, on the other hand, still rule the realm of everyday computing and will continue to do so for the foreseeable future, as quantum machines are delicate and best suited for heavy lifting on very specific challenges. The likely scenario is a hybrid one: quantum co-processors accelerating key pieces of computations within larger classical systems. Significant hurdles like error correction, qubit scaling, and programming abstractions are actively being worked on by some of the brightest minds in physics and computer science. The pace of progress suggests that each new generation of quantum hardware will solve bigger problems, inching us toward the first practical, real-world quantum computing applications.
For technology leaders today, the prudent approach is to stay informed and start experimenting. Much as businesses in the 1950s that ignored the coming wave of digital computers were left behind, organizations in the 2020s should keep an eye on quantum computing’s evolution. That doesn’t mean throwing out classical computers, but it means identifying quantum-friendly problems in your industry – whether it’s complex optimization in logistics, molecular modeling in pharmaceuticals, or advanced machine learning – and engaging with the quantum computing ecosystem early. Some banks, for example, now have small teams writing quantum algorithms for portfolio optimization, so that when hardware is ready, they’ll have a head start. National labs and IT giants are offering partnerships and cloud access to help companies develop quantum skills. The investments made now, even if quantum payoffs are a few years out, could yield a competitive edge when the technology matures.