Jiuzhang 4.0: 3,050 Photons, 25.6 Microseconds, and a Direct Answer to the Algorithm That Threatened Photonic Quantum Advantage

In 2024, a classical algorithm nearly killed the case for photonic quantum supremacy. A year later, China’s photonic quantum program has answered – with a machine so far beyond classical reach that the comparison has become almost absurd.

15 Aug 2025 – A research team led by Pan Jian-Wei and Lu Chao-Yang at the University of Science and Technology of China (USTC), in collaboration with Tsinghua University and Jiuzhang Quantum Technology Co. Ltd., has demonstrated the largest photonic quantum advantage experiment ever conducted. Their new processor, Jiuzhang 4.0, injected 1,024 high-efficiency squeezed states of light into a programmable 8,176-mode photonic circuit and detected up to 3,050 photons in a single run – more than ten times the 255 photons achieved by its predecessor, Jiuzhang 3.0, just two years ago.

The results, posted as a pre-print on arXiv on August 12, 2025, establish what the researchers call a “robust and overwhelming” quantum computational advantage. The Jiuzhang 4.0 processor generates a single Gaussian boson sampling (GBS) output in 25.6 microseconds. Reproducing the same result with the most advanced classical algorithm, the matrix product state (MPS) method, running on El Capitan, currently the world’s most powerful supercomputer, would take more than 10⁴² years. For perspective, the universe is roughly 1.4 × 10¹⁰ years old, so the classical computation would require roughly 10³² times its age.

Why this result matters now

The timing is not coincidental. This experiment was built to answer a specific threat.

In 2024, a team led by Changhun Oh and Liang Jiang published a paper in Nature Physics demonstrating that matrix product state methods could exploit photon loss — the biggest weakness of photonic quantum computers — to efficiently simulate earlier GBS experiments classically. Their algorithm essentially showed that when photons get lost during computation (as they inevitably do in any optical system), the remaining quantum state loses entanglement, and a classical computer can track what’s left using tensor networks. The implication was stark: the quantum advantage demonstrated by earlier Jiuzhang experiments, and by Xanadu’s Borealis, might not survive better classical algorithms.

This was not a theoretical worry. The MPS approach worked. It could simulate configurations matching earlier experiments in feasible time on existing supercomputers. For a moment, the entire photonic quantum advantage paradigm was under serious pressure.

Jiuzhang 4.0 is the direct experimental response.

The technical leap

The processor’s architecture represents a fundamental redesign compared to earlier Jiuzhang iterations. At its core are four optical parametric oscillators that generate single-mode squeezed states of light — quantum states where the uncertainty in one property of the light is reduced below the vacuum level, at the cost of increased uncertainty in the complementary property. These squeezed states are the fuel of Gaussian boson sampling.
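To put rough numbers on that trade-off, here is a minimal sketch (standard textbook relations, not figures from the paper) of how the two quadrature variances scale with the squeezing parameter r: one is suppressed by a factor e^(-2r) below the vacuum level while its conjugate grows by e^(2r), keeping the uncertainty product at the Heisenberg limit.

```python
import math

def quadrature_variances(r: float):
    """Variances of the two quadratures of a single-mode squeezed vacuum,
    with the vacuum variance normalized to 1 (textbook convention)."""
    squeezed = math.exp(-2 * r)        # reduced below the vacuum level
    anti_squeezed = math.exp(2 * r)    # increased in the conjugate quadrature
    return squeezed, anti_squeezed

# Illustrative squeezing parameters (not values from the Jiuzhang 4.0 paper).
for r in (0.5, 1.0, 1.5):
    sq, anti = quadrature_variances(r)
    db = -10 * math.log10(sq)          # squeezing expressed in dB below vacuum
    print(f"r={r}: squeezed var={sq:.3f}, anti-squeezed var={anti:.2f}, "
          f"{db:.1f} dB of squeezing, product={sq * anti:.1f}")
```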

Three key innovations drive the scale-up:

High-efficiency squeezed light sources. The system achieves 92% efficiency in squeezed state generation — a significant improvement over earlier versions. The team used cascaded unbalanced Mach-Zehnder interferometers with 99.8% transmission and greater than 40 dB extinction ratio (a suppression factor of more than 10,000) to filter out unwanted spectral modes, ensuring the purity of the quantum light entering the circuit.

Spatial-temporal hybrid encoding. Rather than building an impossibly large optical circuit with thousands of physical beam splitters (as earlier Jiuzhang machines attempted in a purely spatial encoding), Jiuzhang 4.0 uses a combination of spatial modes and temporal delay loops. Photons are spread across both space and time, with delay loops effectively multiplying the number of modes accessible from a smaller physical circuit. This hybrid approach allows 16 physical detection channels to access 8,176 effective quantum modes — a cubic scaling of connectivity that would be unachievable with spatial encoding alone; a rough sketch of the mode-count arithmetic follows the three innovations below.

Programmability. Unlike the original Jiuzhang, which had a fixed optical circuit, Jiuzhang 4.0 is programmable. The spatial-temporal hybrid encoding circuit can be reconfigured, enabling different computational tasks. This addresses one of the longstanding criticisms of the Jiuzhang program — that its earlier iterations were single-purpose devices that could only perform one specific boson sampling configuration.
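As a rough illustration of where the 8,176 figure can come from (my own sketch, assuming the effective mode count factors as spatial channels times temporal bins; the actual loop architecture may partition modes differently):

```python
# Rough illustration of spatial-temporal hybrid encoding. Assumption (mine,
# not the paper's stated derivation): effective modes = spatial channels x
# temporal bins addressed by the delay loops.
spatial_channels = 16       # physical detection channels quoted in the paper
effective_modes = 8176      # effective quantum modes quoted in the paper

temporal_bins = effective_modes // spatial_channels
print(f"{spatial_channels} channels x {temporal_bins} time bins "
      f"= {spatial_channels * temporal_bins} effective modes")
# -> 16 channels x 511 time bins = 8176 effective modes
```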

Defeating the classical spoofing algorithms

The paper explicitly benchmarks Jiuzhang 4.0 against every known classical simulation strategy, not just the MPS method. The team tested their experimental outputs against thermal states, distinguishable photons, squashed states, and uniform distributions — all of which are classical models that could, in principle, mimic the statistics of a poorly performing quantum device.

In every case, the quantum processor’s outputs were clearly distinguishable from the classical imitations. The team used Bayesian hypothesis testing, correlation function analysis, and direct comparison of photon-number distributions to validate their results.
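To give a sense of what the Bayesian part of that validation looks like in practice, here is a minimal, generic sketch of a log-likelihood-ratio test between the quantum model and a classical mock-up; the toy likelihood functions and data below are placeholders, not the team’s actual models or analysis code.

```python
import math
import random

def bayesian_log_odds(samples, loglik_quantum, loglik_mock):
    """Sum of per-sample log-likelihood ratios. Positive totals mean the data
    favor the quantum model over the classical mock-up (flat prior assumed)."""
    return sum(loglik_quantum(s) - loglik_mock(s) for s in samples)

# Placeholder models: Poisson photon-number distributions with different means.
# In the real analysis these would be the GBS output probabilities versus,
# e.g., the thermal-state or distinguishable-photon model.
def loglik_quantum(n):
    return n * math.log(3.0) - 3.0 - math.lgamma(n + 1)

def loglik_mock(n):
    return n * math.log(2.5) - 2.5 - math.lgamma(n + 1)

# Toy photon-count data with mean ~3, standing in for experimental samples.
random.seed(0)
samples = [sum(random.random() < 0.03 for _ in range(100)) for _ in range(500)]
print(f"log-odds favoring the quantum model: "
      f"{bayesian_log_odds(samples, loglik_quantum, loglik_mock):+.1f}")
```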

But the headline confrontation is with the MPS algorithm. The researchers estimated the computational resources required to construct the tensor network that the MPS method would need to simulate their largest experiment (the L1024 configuration: 1,024 input squeezed states, 8,176 output modes). Even on El Capitan — which supplanted Frontier as the world’s fastest supercomputer — the MPS algorithm would need more than 10⁴² years just to build the required tensor network, let alone sample from it. The quantum processor produces one sample in 25.6 microseconds.

The paper claims an overall quantum computational advantage factor of approximately 10⁵⁴ over the state-of-the-art classical algorithms — a number so large that incremental improvements in classical simulation are unlikely to close the gap.
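That factor is consistent with the headline numbers above. As a back-of-the-envelope check (my arithmetic, assuming the advantage factor is simply the ratio of classical runtime to quantum sampling time):

```python
import math

# Back-of-the-envelope check of the quoted ~10^54 advantage factor.
SECONDS_PER_YEAR = 3.15e7
classical_seconds = 1e42 * SECONDS_PER_YEAR   # >10^42 years on El Capitan (MPS)
quantum_seconds = 25.6e-6                     # one GBS sample on Jiuzhang 4.0

ratio = classical_seconds / quantum_seconds
print(f"classical/quantum time ratio ~ 10^{math.log10(ratio):.0f}")  # ~10^54
```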

How the Jiuzhang series has evolved

| Version | Year | Max detected photons | Modes | Quantum speedup factor | Classical benchmark |
|---|---|---|---|---|---|
| Jiuzhang 1.0 | 2020 | 76 | 100 | ~10¹⁴ | TaihuLight supercomputer |
| Jiuzhang 2.0 | 2021 | 113 | 144 | ~10²⁴ | Classical brute force |
| Jiuzhang 3.0 | 2023 | 255 | ~144 (with pseudo-PNR) | ~10¹⁶ | Frontier supercomputer |
| Jiuzhang 4.0 | 2025 | 3,050 | 8,176 | ~10⁵⁴ | El Capitan + MPS algorithm |

The jump from 255 to 3,050 detected photons is the largest single-generation leap in the Jiuzhang series, enabled primarily by the shift to spatial-temporal hybrid encoding and the dramatically improved squeezed state efficiency. The Hilbert space dimension of the largest experiment — the mathematical space of all possible outcomes — is approximately 10²⁴⁶¹, a number so large it defies any physical analogy.

My Analysis — Two Quantum Tracks, One Strategic Direction

When I covered the original Jiuzhang experiment and then Jiuzhang 3.0 in 2023, I noted that China’s photonic quantum program occupied an unusual position in the global landscape. It was spectacularly fast at one very specific task – Gaussian boson sampling – but that task had limited direct practical application. The machines were not programmable. They couldn’t run Shor’s algorithm or simulate molecules. And there was a growing chorus of critics, particularly after the MPS paper in 2024, who argued that photon loss meant these experiments would never scale to genuine, lasting quantum advantage.

Jiuzhang 4.0 has substantially changed this picture.

The loss problem, addressed head-on

The most important thing about this result is not the speed or the photon count. It’s that the team directly confronted the classical simulation threat and won.

Photon loss has always been the Achilles’ heel of photonic quantum computing. Every beam splitter, every fiber connection, every detector absorbs or misses some fraction of the photons passing through. In earlier Jiuzhang experiments, roughly 70% of photons were lost before detection – a figure that made many theorists skeptical about whether the remaining quantum state retained enough entanglement to resist classical simulation.
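To see why loss is so corrosive, here is a minimal sketch using the standard Gaussian-optics loss formula (V_out = ηV_in + 1 − η, with the vacuum variance normalized to 1); the 10 dB of input squeezing is an illustrative value, and the quoted efficiencies are treated as effective channel transmissions purely for illustration, not as the paper’s loss budget.

```python
import math

def residual_squeezing_db(input_db: float, transmission: float) -> float:
    """Squeezing (in dB below vacuum) that survives a lossy channel,
    using the standard Gaussian-optics formula V_out = eta*V_in + (1 - eta)."""
    v_in = 10 ** (-input_db / 10)
    v_out = transmission * v_in + (1 - transmission)
    return -10 * math.log10(v_out)

# ~70% loss (earlier Jiuzhang experiments) versus 92% efficiency (Jiuzhang 4.0),
# applied to an illustrative 10 dB of input squeezing.
for eta in (0.30, 0.92):
    print(f"transmission {eta:.2f}: 10 dB in -> "
          f"{residual_squeezing_db(10.0, eta):.1f} dB out")
# -> transmission 0.30 keeps only ~1.4 dB; 0.92 keeps ~7.6 dB
```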

The MPS algorithm of Oh et al. (2024) formalized this skepticism. It showed that when detected photon numbers scale as the square root of input photon numbers – the regime that lossy optical systems naturally inhabit – the quantum state can be efficiently approximated by a classical tensor network. This was a rigorous, publishable argument that photon loss was eating away the computational advantage from the inside.

Jiuzhang 4.0 answers this by brute-forcing past the regime where MPS works. By achieving 92% source efficiency and dramatically scaling up the input (1,024 squeezed states), the experiment pushes so far into the high-photon, high-entanglement regime that even the most efficient tensor network decomposition cannot keep up. The detected photon count of 3,050 is not just larger than earlier experiments – it is in a fundamentally different scaling regime where the MPS bond dimensions required for simulation grow beyond any feasible computation.

Is this the last word? Almost certainly not. The history of quantum advantage claims is a history of leapfrogging between better quantum hardware and cleverer classical algorithms. Someone, somewhere, is already looking for a new classical attack on this result. But the margin is now so vast – 10⁵⁴ – that even a dramatic improvement in classical methods would barely dent it.

Two tracks, one laboratory

What makes China’s quantum program uniquely formidable is that Jiuzhang 4.0 is not an isolated achievement. It exists alongside Zuchongzhi 3.0, which set the superconducting quantum advantage record in March 2025 with a 10¹⁵-fold speedup over classical supercomputers. And alongside Zuchongzhi 3.2, which in December 2025 became the first processor outside the United States to demonstrate below-threshold quantum error correction.

These are three world-class results, in two fundamentally different quantum computing modalities, from the same national program, in the same year. No other country has demonstrated quantum advantage in two different hardware platforms simultaneously. The United States leads in superconducting error correction (Google’s Willow) and has strong photonic programs (Xanadu, PsiQuantum), but these are separate companies with separate roadmaps, not a coordinated national effort operating on both tracks in parallel.

The strategic significance of this dual-track approach is easy to miss if you focus only on the specific benchmarks. Gaussian boson sampling is not a general-purpose computation. Random circuit sampling is not a general-purpose computation. Neither machine can break encryption or simulate new drugs. But together, they demonstrate something more important: the ability to design, build, calibrate, and operate quantum systems at the very frontier of what physics allows, across multiple hardware paradigms, with world-class classical simulation teams providing the adversarial benchmarks to ensure the results are genuine.

That is the foundation from which cryptanalytically relevant quantum computers will eventually be built.

The path from GBS to something useful

The conventional criticism of Gaussian boson sampling is that it doesn’t do anything useful. This was true of the original Jiuzhang and remains partially true of Jiuzhang 4.0. GBS itself is a mathematical benchmark – it demonstrates that a quantum system can produce samples from a distribution that classical computers cannot efficiently replicate. But sampling from a hard-to-compute distribution is not the same as solving a useful problem.

However, the Jiuzhang 4.0 paper hints at where this technology is heading. The development of low-loss squeezed light sources and programmable spatial-temporal hybrid encoding circuits, the authors note, “not only immediately allows us to control 3D massive highly-entangled cluster states in the near future, but also paves the way towards the next generation of fault-tolerant photonic quantum computing hardware.”

This is not throwaway language. Cluster states – highly entangled multi-mode quantum states – are the resource states for measurement-based quantum computation, a universal model of quantum computing where computation proceeds by measuring individual qubits in an entangled lattice rather than by applying gates in sequence. If the Jiuzhang team can generate, maintain, and manipulate the kind of large-scale entangled states their squeezed light technology enables, the path to a universal photonic quantum computer becomes clearer.
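For readers unfamiliar with the measurement-based model, here is a tiny illustrative sketch (mine, not from the paper): teleporting a qubit through a two-qubit cluster state enacts a Hadamard gate purely by entangling and measuring, which is the basic primitive that large cluster states generalize.

```python
import numpy as np

# Minimal measurement-based computation: push a qubit through a 2-qubit
# cluster state and obtain a Hadamard "by measurement" (outcome m = 0 branch).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)

psi = np.array([0.6, 0.8])                 # arbitrary input state on qubit 1
state = CZ @ np.kron(psi, plus)            # entangle input with |+> ancilla

# Project qubit 1 onto |+> (X-basis measurement, outcome m = 0) and read out
# qubit 2: apply <+|_1 (x) I_2 to the two-qubit state, then renormalize.
qubit2 = np.kron(plus, np.eye(2)) @ state
qubit2 /= np.linalg.norm(qubit2)

print("measured output:", np.round(qubit2, 4))
print("H @ psi        :", np.round(H @ psi, 4))  # matches: gate done by measurement
# (The m = 1 outcome gives the same state up to a known Pauli-X correction.)
```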

PsiQuantum and Xanadu have been working toward this goal from the commercial side. But neither has demonstrated entanglement at the scale at which Jiuzhang 4.0 operates. If the USTC team pivots from GBS benchmarking toward cluster state generation – and the paper suggests they intend to – the photonic quantum computing race could shift significantly.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.