HSBC and IBM’s Quantum-Enabled Bond Trading Breakthrough
25 Sep 2025 – HSBC and IBM revealed the world’s first known quantum-enabled algorithmic trading trial in the bond market. In a collaboration bridging banking and cutting-edge tech, the team demonstrated up to a 34% improvement in predicting whether a customer’s bond trade would go through at a quoted price – a significant leap over standard classical methods.
The news, trumpeted in a joint press release and described in more detail in a pre-print paper, made waves. IBM’s research division posted that HSBC used an IBM Quantum Heron processor to achieve “up to a 34% improvement in trade-fill prediction over classical-only methods,” heralding this as the first empirical evidence of quantum computing’s potential to enhance algorithmic trading.
The claim is significant: real quantum hardware, solving a real finance problem with better results than classical computing alone? It sounds almost like the elusive “quantum advantage” finally appearing in a practical setting. But what did HSBC and IBM actually do to reach that 34% boost? (The authors never claimed quantum advantage. Social media did, though.)
Inside the Quantum-Powered Trading Experiment
What HSBC and IBM tackled was a fill probability estimation problem in the European corporate bond market – essentially, the likelihood that a bond trade will be “filled” (executed) when a dealer quotes a price to a client. In bond trading, especially over-the-counter via request-for-quote (RFQ) systems, multiple banks compete to offer a price for a client’s bond trade inquiry. Every dealer wants to win the business, but not at a price that loses money. So their trading algorithms continuously balance competitiveness and profitability. Fill probability is the metric that captures this: given your quoted price, what’s the probability the client will hit it and trade with you? It’s a core piece of any algorithmic market-making strategy, directly affecting profit margins and risk.

As the researchers note, even modest improvements in fill probability predictions can translate into meaningful competitive advantage – higher hit rates on desirable trades, avoidance of bad deals, better risk management, and more client flow in the long run. In short, predicting fills correctly means more winning trades and fewer nasty surprises.
However, predicting fills is hard. Bond markets are relatively illiquid and data is sparse, while the number of factors influencing a trade’s outcome is huge (price, size, timing, market conditions, client behavior, etc.). Traditional machine learning models struggle with these complex, noisy datasets – throwing a more complex model at the problem often doesn’t help, because the data itself is the limiting factor. This is where the HSBC-IBM experiment broke new ground: instead of improving the model, they improved the features fed into the model by using a quantum computer to transform the data.
The methodology centered on a hybrid workflow: HSBC’s team took real, historical trading data (covering hundreds of thousands of RFQs over many trading days) and passed it through a quantum feature extractor – an algorithm run on IBM’s quantum hardware – to create new data features. These quantum-generated features were then used to train classical machine learning models to predict fill probabilities, and the performance was compared against models trained on the original (classical) data alone.
Crucially, the quantum processing was done offline, as a separate step, so it didn’t slow down the real-time trading decisions. The quantum computer acted like a specialized data cruncher that you could query for extra insights when building your prediction model. This design meant the live trading system didn’t need a quantum computer on call; it could simply use the enriched dataset prepared in advance, satisfying the ultra-low-latency requirements of trading.
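To make that decoupling concrete, here is a minimal sketch of the offline workflow in Python. Everything in it is illustrative rather than taken from the paper: the file names and array shapes are placeholders, and the quantum_transform stub (a fixed random projection) merely stands in for the batch call to the hardware, which is sketched in the next section.

```python
import numpy as np

def quantum_transform(X, seed=0):
    """Stand-in for the batch PQFM hardware call (see the circuit sketch below).
    A fixed random projection plays the role of the quantum feature map here."""
    W = np.random.default_rng(seed).normal(size=(X.shape[1], 24))
    return np.tanh(X @ W)

# Offline, ahead of time: enrich the historical RFQ dataset once and cache it.
X_classical = np.random.default_rng(1).normal(size=(1000, 216))  # toy stand-in for RFQ features
X_quantum = quantum_transform(X_classical)
np.save("rfq_quantum_features.npy", X_quantum)

# Later, in the fully classical modelling pipeline: load, concatenate, train.
X_augmented = np.hstack([X_classical, np.load("rfq_quantum_features.npy")])
# ...fit any off-the-shelf model on X_augmented; no quantum hardware in the live loop.
```

The point of the structure is that the expensive, slow step happens once, in batch; everything downstream is ordinary classical machine learning.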
What Is a Projected Quantum Feature Map?
At the heart of the trial was a technique called Projected Quantum Feature Maps (PQFMs). In essence, a PQFM is a fancy way to generate new features from your data by tapping into a quantum computer’s ability to handle complex transformations.
Here’s how it worked in this case: first, each data point (an RFQ event with a host of classical features describing the trade and market state) was embedded into a quantum state on the IBM Heron processor. The researchers used a parameterized quantum circuit (specifically, a Heisenberg spin-chain ansatz) to encode the 216-dimensional input vector into the parameters of a 109-qubit circuit on the Heron chip. In simpler terms, they mapped the numbers from the trading data into the quantum system’s parameters – imagine tuning a complex instrument, with the data setting the knobs.
Once the data was encoded in the quantum state, the second step was to measure that state in various ways to generate output numbers (the new features). They measured a set of Pauli observables (think of these like different “projections” of the quantum state, such as each qubit’s orientation along X, Y, Z axes) and took the expectation values as the feature outputs. By doing this, one input vector turned into a new vector of quantum-produced measurements.
This whole mapping – from classical data to quantum state to measured quantities – defines the PQFM. It’s “projected” because the quantum state, living in a high-dimensional Hilbert space, is projected back down to a set of numbers you can feed to a classical algorithm. Importantly, IBM’s implementation kept this transformation fixed and dependent only on the input data (it was never trained on the trade outcomes), so that any performance gain in predictions could be attributed to the quantum features themselves, not some leakage of information.
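As a concrete illustration, the sketch below builds a toy PQFM with Qiskit: a Heisenberg-style chain circuit whose angles come from the data, followed by single-qubit X/Y/Z expectation values as the output features. It is deliberately scaled down to 4 qubits and run on an ideal statevector estimator; the qubit count, layer count, and the exact data-to-angle mapping are my assumptions for the sketch, not the paper’s circuit.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit.primitives import StatevectorEstimator

def pqfm_features(x, n_qubits=4, n_layers=2):
    """Toy projected quantum feature map: data in as angles, Pauli expectations out."""
    # 2n-1 angles per layer: n single-qubit rotations + (n-1) chain couplings.
    angles = np.resize(x, n_layers * (2 * n_qubits - 1))
    qc = QuantumCircuit(n_qubits)
    k = 0
    for _ in range(n_layers):
        for q in range(n_qubits):          # data-dependent single-qubit rotations
            qc.ry(angles[k], q)
            k += 1
        for q in range(n_qubits - 1):      # Heisenberg-style XX+YY+ZZ couplings
            qc.rxx(angles[k], q, q + 1)
            qc.ryy(angles[k], q, q + 1)
            qc.rzz(angles[k], q, q + 1)
            k += 1
    # "Project" back down: expectation value of X, Y, Z on each qubit.
    observables = [
        SparsePauliOp("I" * q + p + "I" * (n_qubits - q - 1))
        for q in range(n_qubits) for p in "XYZ"
    ]
    result = StatevectorEstimator().run([(qc, observables)]).result()[0]
    return np.asarray(result.data.evs)

features = pqfm_features(np.random.default_rng(0).normal(size=216))
print(features.shape)  # (12,): three Pauli expectations per qubit
```

At the paper’s scale the same structure holds, just with 109 qubits, deeper circuits, and the estimator pointed at Heron hardware rather than a statevector simulation.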
IBM’s Heron processors did the heavy lifting here. Heron is IBM’s latest quantum processing unit: a 133-qubit superconducting device with a heavy-hex lattice, designed for improved stability and connectivity, of which 109 qubits were used in this experiment. It’s currently one of IBM’s highest-performing quantum chips, accessible via the cloud. The team ran two versions of their quantum circuit on Heron – a “shorter” circuit and a “longer” circuit – to generate two sets of quantum-transformed data for comparison. They also ran the same circuits on a noiseless quantum simulator (basically, a classical computer simulating an ideal quantum machine) to see what part of the results came from the quantum hardware itself versus just the algorithm. And recognizing that current quantum hardware is noisy, they applied error mitigation techniques (Pauli twirling and a readout error mitigation method called TREX) to squeeze the best performance out of the hardware runs.
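For readers curious what “Pauli twirling plus TREX” looks like in practice, here is a hedged sketch using the Qiskit Runtime options where those techniques are exposed (TREX is Qiskit Runtime’s twirled readout error mitigation). The backend selection and shot count are placeholders, and this is my reconstruction of comparable settings, not the team’s actual job configuration.

```python
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2

service = QiskitRuntimeService()  # assumes a saved IBM Quantum account
backend = service.least_busy(operational=True, simulator=False)

estimator = EstimatorV2(mode=backend)
estimator.options.twirling.enable_gates = True          # Pauli twirling of entangling gates
estimator.options.resilience.measure_mitigation = True  # TREX readout-error mitigation
estimator.options.default_shots = 4096                  # shots per circuit, as in the trial

# job = estimator.run([(isa_circuit, observables)])  # circuit transpiled to the backend ISA
```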
Rigorous Backtesting with Real Data
Having quantum-enriched data is nice, but the real question was: do models trained on these features actually predict fills better on unseen data? To answer that, HSBC’s team set up a rigorous backtesting framework drawn from industry practice. They took a large historical dataset of bond trades (about 1 million RFQs collected over nearly 300 trading days, out of which a subset of ~144k RFQs and 69 days was used for active analysis) and split it into training and test periods in a rolling fashion. For example, a model would train on a certain window of past days, then be tested on the following day’s data, then roll forward – simulating how an algorithm would be retrained and used in production over time. They also enforced a “blinding window,” meaning a gap in time between training data and test data to prevent any overlap or look-ahead bias. This mimics the reality that you’d train your model on recent history and then deploy it, and you want to ensure the test truly reflects new unseen market conditions.
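A minimal sketch of that rolling scheme with the blinding gap, in Python (the window lengths and the one-day gap are illustrative choices, not the paper’s exact values):

```python
import numpy as np

def rolling_splits(day_ids, train_len=20, gap=1, test_len=1):
    """Yield (train_days, test_days) pairs with a blinding window of `gap` days between them."""
    last_start = len(day_ids) - (train_len + gap + test_len)
    for start in range(last_start + 1):
        train = day_ids[start : start + train_len]
        test = day_ids[start + train_len + gap : start + train_len + gap + test_len]
        yield train, test

days = np.arange(69)  # the ~69 active trading days in the study
for train_days, test_days in rolling_splits(days):
    pass  # fit on RFQs from train_days, score out-of-sample AUC on test_days
```

The gap between the last training day and the first test day is what prevents look-ahead leakage: the model never sees data adjacent in time to what it is scored on.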
They evaluated a range of common model types – logistic regression, random forests, gradient boosting (XGBoost), and a feed-forward neural network – to see if the quantum features helped across the board. The primary metric was AUC (Area Under the ROC Curve), a standard measure of classification accuracy that’s good for imbalanced outcomes (and indeed, many RFQs might not result in a fill, so the data is skewed). By using AUC on out-of-sample tests repeated over many time windows, the team could gauge how much each model improved when given quantum-transformed data versus normal data. This backtesting approach is notably model-agnostic and outcomes-focused – they weren’t just doing a one-off train/test split, but repeatedly checking performance as if the model were trading live, which adds confidence that any improvement would persist in practice and not just be a lucky fluke.
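The comparison itself is straightforward to express. The sketch below uses a scikit-learn model zoo as a stand-in for the paper’s configurations (GradientBoostingClassifier stands in for the paper’s XGBoost), with synthetic arrays in place of the real classical and quantum feature sets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Toy stand-ins: classical features vs. quantum-transformed features.
X_tr = {"classical": rng.normal(size=(500, 20)), "quantum": rng.normal(size=(500, 60))}
X_te = {"classical": rng.normal(size=(200, 20)), "quantum": rng.normal(size=(200, 60))}
y_tr, y_te = rng.integers(0, 2, 500), rng.integers(0, 2, 200)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, model in models.items():
    for feats in ("classical", "quantum"):
        model.fit(X_tr[feats], y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te[feats])[:, 1])
        print(f"{name:20s} {feats:9s} AUC = {auc:.3f}")
```

In the real study this loop runs once per rolling window, so each model/feature-set pair yields a distribution of out-of-sample AUCs rather than a single number.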
When Quantum Noise Becomes a Feature
The results of the trial were striking. Models that used the quantum-transformed features significantly outperformed those using only the original classical features. How much better? The headline figure was an up-to-34% relative gain in prediction accuracy (specifically, out-of-sample AUC). In the experiments, all the classical-only models hovered around a similar performance baseline – roughly a median AUC of 0.63 when predicting fill probabilities, which is typical for this kind of problem. When they fed in features generated by a noiseless quantum simulator, interestingly, the models didn’t improve – in fact, they got a bit worse, around 0.60 AUC. But when using the quantum hardware (Heron) features, the models saw a dramatic jump. With the shorter-depth quantum circuit, median AUC went up to ~0.75; with the longer-depth circuit, it reached an impressive ~0.97. In other words, the best quantum-enriched model was predicting fills with near-0.97 AUC, a huge improvement over the ~0.63 from the classical approach. Even the more modest quantum setup beat the classical baseline handily. This pattern held across every model type they tried: whether a simple logistic regression or a neural net, giving the model quantum features made it better at the task.
One of the most intriguing findings was that quantum hardware noise actually seemed to help. Normally, noise in quantum computers is viewed as a nuisance – it corrupts calculations. Yet here, the presence of real hardware noise (the Heron chip is a NISQ device, after all) corresponded with better model performance than either no quantum at all or a perfect noiseless quantum simulation. The authors observed that the quantum-derived features had smoother, more regular distributions than the raw data features, almost as if the quantum processor was filtering out some of the noise in the data by adding a bit of its own kind of noise. They hypothesize that the inherent quantum noise acted as a form of regularization on the feature space, perhaps helping the models generalize better and not overfit to idiosyncrasies in the training data. It’s like the quantum computer applied a creative blur that made the true signal stand out more clearly against the randomness of the market data.
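There is textbook intuition supporting the regularization hypothesis, even though it is not the paper’s claim. For a linear model trained with squared loss, adding independent Gaussian noise to the inputs is equivalent, in expectation, to ridge (L2) regularization:

$$\mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\sigma^2 I)}\Big[\big(y - w^\top(x+\varepsilon)\big)^2\Big] \;=\; \big(y - w^\top x\big)^2 \;+\; \sigma^2 \lVert w \rVert^2,$$

so noisier features bias the learner toward smaller weights and, often, better generalization – a classical result (Bishop, 1995). That makes the hypothesis plausible; what it does not explain, as the next paragraph shows, is why plain artificial noise failed to reproduce the hardware’s effect.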
To test whether it was really something unique about the quantum process (and not just a complex random transformation), the team even tried adding artificial noise to the simulated quantum features – but that still didn’t replicate the hardware’s boost. IBM’s blog post on the project highlighted this point: the improvement was obtained on actual IBM Quantum Heron hardware and “was not reproducible” on classical simulations of the same quantum algorithm. That implies there’s a genuinely non-classical effect at play – maybe not in the sci-fi sense of quantum speedup, but in the subtle statistical sense that the quantum machine finds a perspective on the data that classical methods (even classical computers pretending to be quantum) haven’t found. We should note, however, that the researchers are careful not to claim they fully understand why the hardware outperforms the simulator. The mechanism by which quantum noise yields better features remains an open question. This work is empirical – it shows the data and the results, without a sweeping theory of cause and effect behind that 34% improvement. In their words, they “do not infer any generalizable theory or causal economic effect” here, treating the quantum computer as a black box feature generator for now. It’s a pragmatic approach: try it, measure it, and report what happens.
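That control is easy to express in code: perturb the simulator-derived features with artificial noise, retrain, and re-score. A hedged sketch on toy data follows; the noise model, scale grid, and model choice are all my illustrative assumptions, not the paper’s actual control procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_sim_tr, X_sim_te = rng.normal(size=(500, 60)), rng.normal(size=(200, 60))  # toy stand-ins
y_tr, y_te = rng.integers(0, 2, 500), rng.integers(0, 2, 200)

def auc_with_noise(scale):
    """Train on simulator features plus Gaussian noise; score out of sample."""
    X_noisy = X_sim_tr + rng.normal(scale=scale, size=X_sim_tr.shape)
    model = LogisticRegression(max_iter=1000).fit(X_noisy, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_sim_te)[:, 1])

for scale in (0.0, 0.05, 0.1, 0.2):
    print(f"noise scale {scale:>4}: AUC = {auc_with_noise(scale):.3f}")
```

In the trial, no setting of this dial recovered the hardware’s gain, which is what makes the finding interesting: the hardware noise is apparently not interchangeable with generic Gaussian noise.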
Quantum Advantage or Just a Nifty Hack?
It’s tempting to hail this result as proof of “quantum advantage” in finance – after all, a quantum processor delivered better outcomes than classical computing could, using real-world trading data. Indeed, HSBC’s Head of Quantum Technologies, Philip Intallura, called the trial a “ground-breaking world-first in bond trading,” saying it provides a tangible example of how today’s quantum computers can solve a real business problem and offer a competitive edge.
The fact that IBM’s latest quantum hardware outperformed classical simulations of the same algorithm hints that something inherently quantum is at work. However, it’s important to frame the accomplishment in the right context. This was not about executing trades faster than classical computers, nor about crunching some historically intractable optimization problem in seconds. Rather, it was about augmenting a classical algorithm with quantum-generated features to improve predictive accuracy.
In terms of computation, the heavy lifting of evaluating the model and quoting prices is still classical. The quantum computer’s role was a specialized pre-processing step – valuable, but not a replacement for the trading engine. So we’re not yet looking at a scenario where quantum computers run the trading desk; instead, they’re enhancing the toolset of quant developers and data scientists. It’s also worth noting that the edge demonstrated here is task-specific and was observed in a carefully controlled backtest.
The HSBC-IBM paper itself cautions that the performance gains are specific to this dataset and problem instance, with no guarantee they generalize to other markets or other types of prediction problems. In other words, this is one data point (albeit an exciting one) in the quest to find useful quantum applications. It proves something – namely, that today’s quantum hardware can add value in a complex statistical learning task – but it doesn’t prove everything. We should be careful about extrapolating it as evidence that “quantum will always beat classical” in finance. The current quantum devices still have severe limitations (the Heron chip used 109 qubits and required thousands of shots per circuit to get stable feature values), and the approach worked because it was cleverly set up to avoid those limitations by running offline and reusing quantum outputs.
That said, this experiment does mark an important milestone. It’s the first time a quantum computer has been shown to improve a real-world financial algorithm at scale. The work stands as a proof-of-concept that quantum computers can be more than theoretical toys or lab experiments – they can integrate into actual business workflows and yield an advantage, even in this NISQ era. It shifts the conversation from “quantum might someday do something useful” to “quantum did something useful, right now, with hardware available today.” As IBM’s Jay Gambetta put it, this kind of exploration shows what’s possible when you combine deep domain expertise (HSBC’s trading knowledge) with cutting-edge quantum algorithms, using each for what it does best. The result is a hybrid approach where quantum and classical complement each other rather than compete.
Quantum Feature Engineering: A New Tool in the Trading Arsenal
One broader implication of HSBC and IBM’s trial is how we think about quantum computing in the near term. Instead of the often-discussed goals like breaking encryption or achieving ultra-fast portfolio optimization (which may require fault-tolerant quantum computers far in the future), here we have a more immediate and modest use: using quantum computers as advanced data transformers or feature generators for classical AI and trading systems.
This is a paradigm that could sidestep some of the barriers to near-term quantum adoption. Since the quantum part can be done asynchronously and offline, even a slow, noisy quantum processor can contribute value without jeopardizing the speed or reliability of the live trading system. It’s a bit like having a one-of-a-kind coprocessor in the cloud that, given time, can cook up features your classical computers just wouldn’t think of. And once those features are cooked, your everyday algorithms can consume them as normal.
The notion that quantum noise itself might be a feature rather than a bug, at least in some contexts, is particularly intriguing. It suggests a new research direction: might other noisy quantum devices be used to generate data enrichments or randomizations that improve machine learning models? Could quantum computers act as a kind of complex analog noise source that, when projected into data, helps algorithms escape local minima or overfitting traps? The HSBC-IBM team has essentially shown one example of this in finance. It raises questions about whether similar approaches could work for, say, fraud detection, option pricing, or other predictions in finance that struggle with high dimensionality and non-stationary patterns. Outside of finance, one could imagine quantum feature mapping being tested in fields like genomics or logistics – any domain where data is messy and complex and classical feature engineering hits a wall.
Of course, there’s a flip side: if quantum hardware noise was a key to success here, that somewhat flies in the face of the long-term goal of eliminating noise with fault tolerance. It could mean that today’s quantum computers have a quirky advantage precisely because they are noisy in a certain way that classical randomness isn’t. Whether that holds as hardware improves is an open question. Perhaps future, less-noisy quantum processors will still outperform because of richer entanglement and not just noise – or perhaps once we tame noise, we’ll need to artificially add some back in to retain this regularization effect! We’re in early days of understanding this phenomenon.
Critical Reflections and Looking Ahead
As a commentator and technologist watching this space, I find this development both exciting and humbling. On one hand, it’s undeniably exciting to see a quantum computer deliver a measurable edge on a real financial problem. The 34% improvement isn’t just a trivial tweak; it’s the kind of jump trading teams dream about, often the difference between being a market leader and an also-ran.
The fact that HSBC is publicly celebrating this trial suggests they see genuine commercial promise – not in some distant future, but in the near term. It validates years of work by quantum computing researchers who have argued that hybrid quantum-classical approaches can yield benefits before we hit full scalability. It also might light a fire under competitors; Wall Street and the City of London will surely take notice that HSBC got a head start here. Will other banks now rush to partner with quantum providers, fearing they might get left behind if they don’t explore similar use cases? The arms race for quantum talent and partnerships in finance may be entering a new phase.
On the other hand, it’s humbling and important to stay critical. The result, impressive as it is, prompts many questions. Why exactly did the quantum features work so well? Is it something fundamentally quantum – entangling 100+ qubits to capture a complex correlation that classical feature engineering missed – or could a clever classical technique achieve similar smoothing or embedding of the data? The authors themselves admit they don’t have a full explanation and that more research is needed to pin down the source of the advantage. It’s also worth noting that while 34% is the headline, that figure came under ideal conditions (the longest circuit, no time gap between train and test). The advantage dipped as the “blinding” window increased, though it remained above the classical baseline. So in a truly live, continuously evolving market, the edge might be a bit less dramatic, albeit still present.
We should also consider the cost-benefit: running large numbers of quantum circuit shots (they used 4,096 shots per data point for the measurements) on today’s hardware is not cheap or fast. The trial was a batch job, not something happening in milliseconds. If this were deployed, say, weekly or overnight to update models, is the infrastructure ready and economical? Perhaps yes, as cloud quantum services mature, but it’s a new kind of IT workflow to manage alongside classical HPC resources. The good news is the integration looked seamless in this case – the quantum part was bolt-on, so existing trading systems wouldn’t need a teardown to try it. That bodes well for adoption: if similar gains appear in other use cases, firms might be able to plug in quantum features relatively easily.
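A back-of-the-envelope calculation shows why shot counts in the thousands are needed. Each Pauli expectation is estimated by averaging ±1 measurement outcomes, so its standard error shrinks only as the square root of the shot count:

$$\mathrm{SE}\big[\langle P \rangle\big] \;=\; \sqrt{\frac{1 - \langle P \rangle^2}{N_{\text{shots}}}} \;\le\; \frac{1}{\sqrt{4096}} \;\approx\; 0.016,$$

meaning each quantum feature carries statistical noise on the order of a percent or two of its full range even before hardware errors enter the picture. Halving that noise means quadrupling the shots, and the cost with it.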
In conclusion, HSBC and IBM’s demonstration is a landmark for quantum computing in finance. It showed concrete evidence that even today’s quantum computers can complement classical systems to solve a problem better than we knew how to do before. It doesn’t mean we’ve achieved a broad quantum advantage, and it doesn’t mean every trading problem will see the same boost. But it expands the horizon of what’s possible in the here-and-now. As someone who has followed quantum technology’s promises for years, seeing a real deployment with real data is gratifying. The result invites both enthusiasm and scrutiny: enthusiasm that perhaps we’ve found the first glimmer of quantum advantage for business, and scrutiny to understand it deeply and ensure it wasn’t a one-off quirk.