Payments and the Race to Quantum Safety / Post-Quantum Cryptography (PQC)
Introduction
It is December 11, 2025, and inside the Bank for International Settlements’ (BIS) Eurosystem Centre, a small team of cryptographers and payment system engineers is staring at a set of test results that will quietly reshape how the global financial industry thinks about its future. For months, they have been running post-quantum cryptographic signatures through the Eurosystem’s TARGET2 – the real-time gross settlement (RTGS) system that moves an average of over two trillion euros daily between Europe’s central banks. The experiment, known as Project Leap Phase 2, has just concluded. The good news: every single test passed. PQC-signed liquidity transfers flowed correctly between the Bank of Italy, the Banque de France, and the Deutsche Bundesbank. Invalid signatures were properly rejected. The math works.
The bad news is in the performance data. Signature verification – the operation every payment message must survive before it can settle – now takes 209.9 milliseconds on average, compared to 28.1 milliseconds with the RSA signatures the system was built to handle (based on software-based testing without HSMs). That is a 7.5× slowdown. Worse, the new signatures are nearly ten times larger than their predecessors, overflowing buffers in message-handling logic whose size assumptions TARGET2’s engineers never expected to revisit. The hybrid approach – running old and new cryptography side by side, as every regulator recommends – turns out to require what the BIS report calls “substantial evolution of the system,” as “hybridisation was not envisaged in the original cryptographic design.”
There is also the matter of the PKI dependency that nobody had fully anticipated. During testing, a perfectly valid, correctly PQC-signed liquidity transfer message sailed through early validation – the cryptography was verified, the format was correct, the payment instruction was sound – but it could not complete settlement. The reason: the corresponding digital certificate was missing from TARGET2’s static reference data. The payment system did not just need new algorithms; it needed an entire parallel infrastructure for distributing, storing, and validating PQC certificates across every participant in the network. The “cryptography problem,” it turned out, was really a PKI and reference data problem – a distinction that matters enormously for migration planning.
Project Leap did not fail. It succeeded in precisely the way that matters most: by exposing the real engineering problems before they become real-world crises. And those problems, it turns out, are not the problems most people expected.
The Migration That Touches Everything
The payments industry has navigated big cryptographic transitions before. The migration from magnetic stripes to EMV chips took the better part of two decades and cost billions. The shift from SHA-1 to SHA-256 certificates was painful but bounded – it mostly meant updating software, not ripping out hardware. The post-quantum transition is different in kind, not just degree. It touches every layer of the payments stack simultaneously: the silicon inside a contactless card, the hardware security modules in bank data centers, the message formats that glue the global system together, and the settlement infrastructure operated by central banks.
To understand why, consider what happens when you tap your card at a coffee shop. That seemingly instantaneous transaction triggers a cascade of cryptographic operations spanning multiple organizations, networks, and jurisdictions. Your card generates a cryptogram using keys stored in its secure element. The terminal authenticates the card using a certificate chain rooted in the card network’s certificate authority. The authorization request travels through the acquirer’s network to the issuer, wrapped in TLS sessions secured by RSA or ECC key exchange. If the issuer approves, the settlement eventually flows through interbank networks – SWIFT, Fedwire, TARGET2 – each with their own certificate hierarchies, HSM dependencies, and signature requirements. As I previously mapped in detail, a single cross-border payment can touch dozens of distinct quantum-vulnerable cryptographic operations, each managed by a different entity with its own upgrade cycle.
Every one of those asymmetric cryptographic operations – the key exchanges, the digital signatures, the certificate validations – is mathematically breakable by a sufficiently powerful quantum computer running Shor’s algorithm. The symmetric cryptography (AES, 3DES) used for PIN encryption and transaction cryptograms will survive the quantum era with doubled key lengths. But the public-key infrastructure that binds the entire system together will not. And here is the detail that separates payments from most other industries facing PQC migration: the cryptographic surface is not owned by a single entity. Your bank controls its HSMs. The card network controls its certificate authority. SWIFT controls its message signing. The central bank controls the settlement system. Each operates on independent upgrade cycles, with independent vendors, independent certification requirements, and independent risk appetites. Migrating one without coordinating with all the others is like replacing the foundation of a house while the neighbors are still leaning on the shared wall.
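As a rough sketch of what that cascade looks like from a migration-planning perspective, the operations can be inventoried and classified by quantum vulnerability. The operation names and algorithm assignments below are simplified illustrations for one plausible transaction path, not a faithful protocol trace:

```python
# Illustrative (not exhaustive) inventory of cryptographic operations
# triggered by a single contactless card payment, classified by whether
# Shor's algorithm breaks the underlying math. Labels are simplified
# stand-ins, not actual protocol identifiers.

SHOR_BREAKABLE = {"RSA", "ECC", "ECDSA", "ECDH"}   # asymmetric: broken outright
GROVER_WEAKENED = {"AES-128", "3DES"}              # symmetric: survives with longer keys

operations = [
    ("card cryptogram generation",        "AES-128"),
    ("offline card authentication",       "ECDSA"),
    ("terminal-to-acquirer TLS",          "ECDH"),
    ("acquirer-to-issuer TLS",            "RSA"),
    ("SWIFT message signing",             "RSA"),
    ("RTGS settlement signature",         "RSA"),
]

def quantum_vulnerable(algorithm: str) -> bool:
    """True if a large quantum computer breaks the algorithm outright."""
    return algorithm in SHOR_BREAKABLE

vulnerable = [name for name, alg in operations if quantum_vulnerable(alg)]
print(f"{len(vulnerable)} of {len(operations)} operations need PQC replacement")
```

Even this toy inventory makes the ownership problem visible: the six operations above are managed by at least five different entities, each on its own upgrade cycle.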
The numbers illustrate the uniqueness of the challenge. A typical enterprise PQC migration might involve hundreds of cryptographic touchpoints. My previous analysis shows that a large payments institution could face over 120,000 discrete program tasks – a figure that encompasses not just algorithm replacement but testing, certification, coordination with counterparties, and regression across legacy systems that nobody wants to touch.
The timelines have sharpened considerably. NIST’s draft IR 8547 proposes deprecating RSA and ECC at 112-bit security after 2030 and disallowing all quantum-vulnerable asymmetric cryptography after 2035. The G7 Cyber Expert Group’s January 2026 roadmap targets 2030–2032 for critical financial system migration. The EU’s coordinated plan requires member states to establish national roadmaps by end of 2026. Singapore’s MAS has issued advisory guidance (Circular MAS/TCRS/2024/01) recommending cryptographic asset inventories and migration strategies. The HKMA announced a Quantum Preparedness Index in February 2026 to score banking sector readiness. The UK NCSC set phased targets: crypto discovery by 2028, high-priority systems by 2031, full transition by 2035.
The consensus is unmistakable: the payments industry should begin migrating its most critical systems to post-quantum cryptography. For an industry that took twenty years to deploy EMV chips, these are extraordinarily compressed timelines – made more urgent by the fact that “harvest now, decrypt later” attacks mean adversaries are almost certainly recording encrypted financial data today for retroactive decryption.
And the stakes? The Citi Institute’s January 2026 report, drawing on the Hudson Institute’s econometric modeling, put a number on them: a quantum-enabled attack on a top-five U.S. bank’s Fedwire access could cause $2.0–3.3 trillion in indirect economic losses – 10–17% of annual U.S. GDP – and trigger a six-month recession through cascading liquidity failures and frozen payments.
This is not a technology problem that happens to affect payments. It is a payments problem that happens to require new technology. And the devil, as always, is in the details of implementation. So let’s dig into seven payments-specific challenges that make this migration uniquely difficult.
Challenge 1: The Signature Size Explosion and Broken Message Formats
The problem. The single most disruptive consequence of post-quantum cryptography for payments is not computational speed – it is data size. An ML-DSA-44 digital signature occupies 2,420 bytes. The ECDSA signature it replaces is 64 bytes. That is a ~37.8× increase. ML-KEM-768 public keys are 1,184 bytes with 1,088-byte ciphertexts. A hybrid PQC-enabled TLS handshake balloons from approximately 1.2 KB to 14.7 KB. For payments, this is structurally breaking.
To grasp the severity, you need to understand how deeply the current size assumptions are baked into payment infrastructure. ISO 8583, the compact binary message format that has underpinned card payment authorization worldwide for decades, was designed in an era when every byte mattered – when messages traveled over dial-up connections and satellite links where bandwidth was expensive and latency was measured in seconds. Its predefined field sizes typically cap at 256 bytes for authentication data. An ML-DSA signature does not fit. It is not close to fitting.
The implications cascade. The standard would need fundamental redesign – new field definitions, new length encoding, new buffer allocations across every switch, gateway, and payment processor in the global card network. Every middleware component, every message parser, every logging system that assumes a certain maximum message size would need updating. Payment processors that have spent decades optimizing their ISO 8583 parsing logic for speed – shaving microseconds by hardcoding field offsets – would find those optimizations broken. Database schemas designed around current field sizes would need restructuring. Archive and compliance systems that store transaction records would face storage multiplication. The testing burden alone – regression testing every downstream system that touches a modified message format – would be enormous.
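The core incompatibility can be shown in a few lines. This sketch uses the 256-byte authentication-data cap described above and a generic length-prefixed encoding; the encoding details are simplified for illustration and do not reproduce any specific ISO 8583 field definition:

```python
# Sketch of why an ML-DSA-44 signature breaks ISO 8583 field assumptions.
# The 256-byte cap mirrors the typical authentication-data field limit
# discussed above; the length-prefix encoding is a simplification.

MAX_AUTH_FIELD_BYTES = 256   # typical cap for authentication data fields

ECDSA_SIG_BYTES = 64
ML_DSA_44_SIG_BYTES = 2420

def encode_auth_field(signature: bytes) -> bytes:
    """Encode a signature into a length-prefixed field, enforcing the cap."""
    if len(signature) > MAX_AUTH_FIELD_BYTES:
        raise ValueError(
            f"signature of {len(signature)} bytes exceeds "
            f"{MAX_AUTH_FIELD_BYTES}-byte field limit"
        )
    return len(signature).to_bytes(2, "big") + signature

# A classical ECDSA signature fits comfortably...
encode_auth_field(b"\x00" * ECDSA_SIG_BYTES)

# ...but an ML-DSA-44 signature is rejected outright.
try:
    encode_auth_field(b"\x00" * ML_DSA_44_SIG_BYTES)
except ValueError as exc:
    print(f"rejected: {exc}")
```

The point is not the encoding itself but where the failure surfaces: not in the cryptographic library, but in every parser, switch, and gateway that enforces the old size assumption.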
ISO 20022, the XML-based format that SWIFT is migrating the world to, is more extensible – it can technically accommodate larger payloads. But “technically” does less work than you might think. BIS Project Leap Phase 2 tested PQC signatures at the ISO 20022 Business Application Header level, where a CRYSTALS-Dilithium (Round 3, NIST Level 3) signature of 3,293 bytes replaced an RSA-2048 signature of 256 bytes – a ~12.9× increase within the message header alone. (Notably, the BIS team was unable to test the more recent standardized ML-DSA variant.) The larger headers exceeded expected buffer sizes in TARGET2’s message-handling logic, and the legacy ESMIG connector used in TARGET2 was ill-prepared for the larger payloads.
A certificate chain with ML-DSA could add tens of kilobytes per transaction. At thousands of transactions per second, this aggregates into enormous additional bandwidth and storage demands across the settlement infrastructure.
Emerging approaches. The honest assessment is that solutions here are still early. No dedicated published analysis of field-level PQC impact on ISO 8583 exists – a striking gap given the format’s centrality to global card payments. The X9 Accredited Standards Committee has published a Post-Quantum Cryptography Financial Readiness Needs Assessment covering both ISO 8583 and ISO 20022 financial messaging, but the hard engineering of accommodating larger payloads across thousands of institutions remains ahead.
SWIFT’s announcement that SwiftNet 8.0, targeted for 2027, will be PQC-enabled is the most concrete timeline commitment from a messaging infrastructure provider. The 15-month migration window SWIFT has indicated signals the organization recognizes this cannot be done overnight.
The most promising technical approach for the near term is what some researchers are calling “signature compression with out-of-band certificate distribution” – caching PQC certificates at endpoints rather than transmitting full certificate chains with every message. This would dramatically reduce per-transaction overhead but requires coordinated infrastructure for certificate distribution and revocation that does not yet exist in payments.
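A minimal sketch of that idea, assuming a simple content-addressed cache: the certificate is distributed once, out of band, and each message then carries only a short key identifier plus its signature instead of the full chain. The class, method names, and the 20 KB chain size below are hypothetical illustrations; no such payments API exists yet:

```python
# Sketch of out-of-band certificate distribution: endpoints cache PQC
# certificates keyed by a short identifier derived from the certificate
# itself, so per-message overhead shrinks to key ID + signature.

import hashlib

class CertificateCache:
    def __init__(self):
        self._certs: dict[bytes, bytes] = {}

    def register(self, certificate: bytes) -> bytes:
        """Store a certificate out of band; return its short key ID."""
        key_id = hashlib.sha256(certificate).digest()[:8]
        self._certs[key_id] = certificate
        return key_id

    def resolve(self, key_id: bytes) -> bytes:
        """Look up the full certificate for a key ID carried in a message."""
        return self._certs[key_id]

cache = CertificateCache()
cert = b"\x01" * 20_000             # stand-in for a ~20 KB PQC cert chain
key_id = cache.register(cert)       # distributed once, out of band

per_message_overhead = len(key_id) + 2420   # key ID + ML-DSA-44 signature
full_chain_overhead = len(cert) + 2420      # chain + signature, in-band
print(f"per-message bytes: {per_message_overhead} vs {full_chain_overhead}")
```

The catch, as noted above, is everything the sketch omits: distribution, rotation, and revocation of cached certificates across thousands of participants – infrastructure that does not yet exist in payments.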
NIST’s ongoing additional signature standardization round could also help. Several candidate algorithms under evaluation – notably SQIsign and HAWK – offer substantially smaller signature sizes than ML-DSA, though with trade-offs in verification speed. If one of these makes it to standardization (expected 2027–2028), it could provide a more message-format-friendly option for constrained environments. But betting a migration plan on algorithms not yet standardized is a risky strategy.
The practical path for most payment institutions is to begin ISO 20022 migration now (if not already underway) and to architect new message-handling systems with generous buffer sizes and extensible field definitions – what amounts to building in “PQC headroom” even before PQC algorithms are deployed.
Challenge 2: The Smart Card Memory Crisis
The problem. If message formats are the most structurally breaking challenge, smart cards are the most physically constraining one. Nearly fourteen billion payment cards are in circulation worldwide. Pick one up and look at it: that small gold or silver contact patch, smaller than a postage stamp, conceals a system-on-a-chip that must perform public-key cryptography under constraints that would make most software engineers wince. The high-end ones run a 32-bit single-core processor at 100 MHz with 48 KB of RAM – roughly the computational power of a 1990s graphing calculator – communicating at under 100 KB/s, with a contactless transaction time budget of under 300 milliseconds. That 300-millisecond window is not arbitrary; it is the threshold beyond which a “tap and go” payment starts to feel like a “tap and wait,” generating the kind of consumer friction that makes card issuers nervous. Current chips have hardware accelerators for AES and RSA/ECC but none for post-quantum algorithms, and many PQC schemes spend 40–70% of execution time on hashing operations for which these accelerators provide no benefit.
IDEMIA, one of the world’s largest smart card manufacturers, has done the math and presented the results at NIST’s PQC standardization conferences. Classic McEliece, the code-based encryption scheme, requires over 70 KB of RAM – more than the total memory available on a payment card. Falcon demands over 25 KB plus masking overhead. Only lattice-based algorithms (ML-KEM and ML-DSA) are even theoretically feasible on current hardware, and side-channel protection – essential for payment cards, where physical attackers can probe power consumption and electromagnetic emissions – multiplies execution time by 2× to 5.6× depending on the countermeasure.
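The triage IDEMIA describes amounts to a back-of-envelope feasibility check against the card’s RAM budget. The McEliece and Falcon figures below come from the numbers quoted above; the ML-DSA and ML-KEM working-set sizes and the flat 2× RAM overhead for side-channel masking are crude placeholder assumptions for illustration only:

```python
# Rough feasibility screen: does a masked implementation's working set
# fit in a payment card's ~48 KB of RAM? Lattice RAM figures and the
# masking multiplier are illustrative assumptions, not vendor data.

CARD_RAM_KB = 48
MASKING_RAM_FACTOR = 2.0   # assumed RAM overhead for side-channel masking

# (algorithm, approximate peak unmasked RAM in KB)
candidates = [
    ("Classic McEliece", 70),   # >70 KB per IDEMIA's figures
    ("Falcon",           25),   # >25 KB before masking
    ("ML-DSA-44",        14),   # placeholder figure
    ("ML-KEM-768",       10),   # placeholder figure
]

def fits_on_card(peak_ram_kb: float) -> bool:
    """Check whether the masked working set fits in card RAM."""
    return peak_ram_kb * MASKING_RAM_FACTOR <= CARD_RAM_KB

feasible = [name for name, ram in candidates if fits_on_card(ram)]
print(f"feasible on a 48 KB card: {feasible}")
```

Under these assumptions only the two lattice schemes survive the screen – consistent with IDEMIA’s conclusion, though the real analysis must also account for the 2–5.6× execution-time cost of masking within the 300-millisecond transaction budget.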
IDEMIA’s 2022 implementation of hybrid PQC EMV protocols on banking smart cards demonstrated feasibility but revealed that EMV commands’ 256-byte data transfer limit required creating entirely new extended commands to handle PQC signature sizes. The FS-ISAC’s payment card guidance explicitly notes that “it is not known if EMV is able to accommodate quantum-safe certificates” – a remarkable admission from an industry body, essentially saying the existing card payment standard may not be fixable for the quantum era.
Emerging approaches. The most significant hardware development is IDEMIA’s partnership with GlobalFoundries to produce a new 28nm smart card chip (the GF 28ESF3 platform) specifically designed for PQC, with mass production targeted for 2026. The jump from 40nm to 28nm process technology should provide significantly more transistors – and therefore more RAM and computational headroom – within the same silicon area and power envelope. Infineon has also demonstrated PQC on commercially available contactless chips, though the specific performance and memory profiles have not been publicly detailed.
Mastercard has taken a different approach with its Enhanced Contactless (Ecos) specification, under which the first quantum-resistant contactless payment cards – manufactured by Giesecke+Devrient and Thales – were approved in October 2022. An important nuance: Ecos relies primarily on AES symmetric encryption (supporting 128-, 192-, and 256-bit keys, with AES-256 providing quantum resistance) rather than NIST post-quantum asymmetric algorithms. This makes the “quantum-resistant” label technically accurate but sidesteps the harder problem of PQC asymmetric authentication on constrained hardware.
The broader approach emerging across the industry is a pragmatic triage. Online transactions – which represent the vast majority of card payments – can rely on symmetric cryptographic authentication (AES-based cryptograms) that is already quantum-safe. The real vulnerability is in offline authentication scenarios: transit systems, aircraft, cruise ships, anywhere a card must prove its authenticity to a terminal without an online connection back to the issuer. For these scenarios, the migration path likely runs through new-generation silicon (IDEMIA/GlobalFoundries 28nm, Infineon equivalents), updated EMV specifications to support PQC certificates, and a multi-year card and terminal replacement cycle. The 20-billion-device replacement estimate frequently cited in industry analyses begins to feel less like hyperbole and more like accounting.
Challenge 3: The HSM Certification Bottleneck
The problem. If smart cards are the most physically constrained element of the payment ecosystem, Hardware Security Modules are the most operationally critical. These tamper-resistant boxes – typically rack-mounted, physically hardened, designed to zeroize their contents if someone tries to open the casing – are the root of trust for every payment cryptographic operation. Every digital signature, every key exchange, every PIN verification in a payment chain ultimately depends on an HSM-protected key. Banks do not just use HSMs; they are defined by them. An HSM failure in a critical payment system can halt settlement, freeze card issuance, and trigger regulatory incident reports. Upgrading them is correspondingly fraught – it is the operational equivalent of performing heart surgery while the patient runs a marathon.
SWIFT mandates FIPS 140-2 Level 2+ certified HSMs (Level 3 for qualified certificates). PCI compliance requires its own separate HSM validation. And here is the critical finding for any payments CISO reading this: as of early 2026, no HSM vendor has completed a FIPS 140-3 Level 3 CMVP validation that includes PQC algorithms within the validated module boundary.
That bears repeating. The certification infrastructure has not caught up with the technology. You can buy HSMs that support ML-KEM and ML-DSA today – several vendors ship firmware with native PQC support. But if your regulator or auditor requires FIPS 140-3 validation that explicitly covers the PQC algorithms (as opposed to “the HSM is FIPS-validated and also happens to support PQC outside the validation boundary”), you cannot currently deploy PQC in production for regulated payment operations. Based on industry timelines, the first such validation may not arrive until 2027.
Emerging approaches. The HSM vendors are racing, and the landscape has evolved rapidly through 2025. Thales Luna was the first HSM family to achieve FIPS 140-3 Level 3 validation, with the Luna K7 module receiving certificate #4684 in April 2024 and the Luna G7 module receiving certificate #4962 subsequently. Luna firmware v7.9, released mid-2025, delivers native ML-KEM and ML-DSA support across the Luna 7 HSM family. FIPS 140-3 validation including PQC is in progress. Entrust nShield 5 achieved FIPS 140-3 Level 3 in August 2024. Firmware v13.8.0 provides native ML-DSA support, with ML-KEM added in v13.8.3 and SLH-DSA in v13.9. CAVP (algorithm-level) validation was achieved September 2025. Notably, nShield 5 includes an FPGA-based crypto accelerator designed specifically for hardware-accelerated PQC operations – a recognition that software-only PQC in payment environments may not meet latency requirements. Utimaco’s Atalla AT1000 became the first payment HSM to receive FIPS 140-3 certification in June 2025, with its GP HSM Se-Series supporting ML-KEM, ML-DSA, LMS, and XMSS.
The most payment-relevant milestone belongs to Futurex, which in June 2025 became the only HSM supporting PQC to have been PCI HSM validated by the PCI Security Standards Council. This PCI-specific validation is distinct from FIPS 140-3 and directly addresses payment compliance requirements. For payment institutions that need to demonstrate PQC readiness within the PCI framework today, Futurex is currently the only certified option.
The practical approach for payment institutions in 2026 is a two-track strategy. First, begin testing PQC algorithms using existing HSM firmware in non-production environments – all major vendors now support this. Second, plan production deployment timelines around the anticipated FIPS 140-3 PQC validation dates, likely 2027, while using Futurex’s PCI HSM validation as a payment-specific compliance bridge where applicable. PostQuantum.com’s analysis predicts a surge in demand for specialized FPGA or ASIC offload cards for ML-DSA verification at line speed – a market that Entrust’s FPGA-equipped nShield 5 is positioning to serve.
Challenge 4: The Hybrid Architecture Trap
The problem. Every regulator, every industry framework, every standards body recommends hybrid cryptography as the transition approach – running a classical algorithm (RSA or ECC) alongside a PQC algorithm simultaneously, so security holds as long as either component remains unbroken. The logic is impeccable: hybrid hedges against the possibility that either the classical or the post-quantum algorithm proves weaker than expected. It is insurance against the unknown unknowns.
In theory, hybrid is the conservative, safe choice. In practice, Project Leap Phase 2 demonstrated that most payment system architectures cannot actually support it without major redevelopment.
The core issue is architectural rigidity. Payment systems were not designed to be cryptographically flexible. They were designed to be fast, reliable, and deterministic – and those qualities were achieved precisely by eliminating options and variability. Message ingestion pipelines, gateways, and validation modules expect exactly one signature verification path, one certificate format, one set of buffer assumptions. Change any of those assumptions and you do not get a graceful degradation; you get a system that rejects messages, drops transactions, or – in the worst case – processes payments it should have rejected.
TARGET2’s ESMIG signature verification software, at the Business Application Header level, accepts only a single cryptographic algorithm. Simultaneous validation of traditional and PQC signatures was not feasible without what the BIS report called “substantial evolution of the system.” The team implemented a workaround – a dual-path approach splitting incoming messages between separate RSA and PQC verification engines – but concluded that hybrid mode doubles computation and massively increases data payloads. TARGET2’s existing design “could not easily accommodate” hybrid mode without substantial redevelopment.
This is not unique to TARGET2. It is the natural consequence of how payment systems have been engineered for decades: optimized for throughput and reliability, not cryptographic flexibility. The same architectural pattern – single-algorithm validation paths, hardcoded buffer sizes, middleware that assumes a specific cryptographic envelope – exists in Fedwire, in SWIFT messaging gateways, in card network authorization switches, and in countless institutional payment processing systems worldwide. Each one will need its own version of the dual-path workaround that Project Leap improvised.
Emerging approaches. The industry is converging on the concept of crypto-agility – designing systems that can swap cryptographic algorithms without code rewrites. Mastercard’s October 2025 R&D white paper advocates cryptographic agility as the “cornerstone of quantum readiness” and recommends building algorithm-selection abstraction layers into payment infrastructure.
The practical architecture pattern emerging is what the BIS team implemented as a workaround: parallel verification services. Rather than attempting to squeeze hybrid validation into existing single-algorithm pipelines, institutions should deploy dedicated PQC verification engines alongside existing classical verification, with an orchestration layer that routes messages through both paths. This is heavier than a simple algorithm swap, but it preserves backward compatibility during the transition period and can be scaled independently.
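A minimal sketch of that dual-path pattern, assuming an orchestration layer that dispatches on an algorithm tag: a classical-only message carries one signature, a hybrid message carries two, and the hybrid rule requires every attached signature to verify. The verifier functions are stand-ins; a real deployment would call out to HSM-backed verification services:

```python
# Dual-path verification orchestration sketch. The verify_* functions
# are placeholders for calls to separate RSA and ML-DSA verification
# engines; the hybrid rule is "all attached signatures must pass".

from typing import Callable

def verify_rsa(message: bytes, sig: bytes) -> bool:
    return sig == b"rsa-ok"        # stand-in for RSA verification

def verify_ml_dsa(message: bytes, sig: bytes) -> bool:
    return sig == b"pqc-ok"        # stand-in for ML-DSA verification

ENGINES: dict[str, Callable[[bytes, bytes], bool]] = {
    "rsa": verify_rsa,
    "ml-dsa": verify_ml_dsa,
}

def verify(message: bytes, signatures: dict[str, bytes]) -> bool:
    """Accept the message only if every attached signature verifies."""
    if not signatures:
        return False
    return all(ENGINES[alg](message, sig) for alg, sig in signatures.items())

# Classical-only, hybrid, and tampered-hybrid messages:
assert verify(b"pay", {"rsa": b"rsa-ok"})
assert verify(b"pay", {"rsa": b"rsa-ok", "ml-dsa": b"pqc-ok"})
assert not verify(b"pay", {"rsa": b"rsa-ok", "ml-dsa": b"bad"})
```

Note what the sketch makes explicit: hybrid mode doubles verification work per message, which is exactly the computation cost the BIS team flagged.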
IETF has standardized hybrid terminology through RFC 9794 (June 2025) and is progressing drafts for hybrid ECDH+ML-KEM for TLS 1.3. Chrome, Signal, and Apple iMessage have already deployed hybrid PQC key exchange in production – demonstrating that hybrid works at massive scale in communication protocols, even if payment settlement architectures remain harder to adapt.
AWS announced PQC support across its services, including AWS KMS hybrid ML-KEM key exchange with a reported 0.05% throughput reduction, while AWS Payment Cryptography added PQC support for data in transit in November 2025 – evidence that cloud-based payment processing can absorb the hybrid overhead more gracefully than on-premise legacy systems. For institutions with flexibility in their infrastructure model, the cloud may offer a faster path to hybrid PQC than retrofitting on-premise settlement systems.
Challenge 5: The Settlement Latency Crisis
The problem. Real-time gross settlement systems – Fedwire, TARGET2, CHAPS, TIPS, FedNow – are the arteries of the global financial system. Fedwire alone settles over $4 trillion daily. These systems process transactions measured in microseconds to milliseconds, with strict throughput requirements and liquidity optimization algorithms that depend on rapid settlement. A few extra milliseconds per transaction might sound trivial, but when multiplied by hundreds of thousands of daily settlement messages, the cumulative impact reshapes liquidity windows, delays netting cycles, and potentially forces institutions to hold larger reserve balances to compensate for processing uncertainty.
The 7.5× slowdown in signature verification measured in Project Leap Phase 2 (209.9 ms versus 28.1 ms) has direct consequences for settlement windows. At thousands of transactions per second, cumulative data bloat from ~10× larger signatures compounds bandwidth and processing demands. Consider what this means in practice: a central bank that currently processes the day’s settlement queue in a defined window would need either substantially more time or substantially more computing capacity to achieve the same throughput with PQC signatures. During peak settlement periods – end-of-month, end-of-quarter, financial stress events – the additional processing overhead could compress already-tight timing margins.
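The capacity arithmetic is simple enough to sketch. Using the Project Leap verification timings quoted above, and a hypothetical queue of 500,000 messages to be cleared in a two-hour window (both round numbers invented for illustration), the required degree of parallelism scales directly with the slowdown:

```python
# Illustrative capacity math for the Project Leap timings: how many
# parallel verifier workers are needed to clear a settlement queue in a
# fixed window? Message volume and window size are made-up round numbers.

import math

CLASSICAL_MS = 28.1    # RSA verification, Project Leap Phase 2
PQC_MS = 209.9         # CRYSTALS-Dilithium verification, same test

MESSAGES = 500_000               # hypothetical daily settlement messages
WINDOW_SECONDS = 2 * 3600        # hypothetical 2-hour settlement window

def verifiers_needed(per_msg_ms: float) -> int:
    """Parallel verifiers needed to clear the queue within the window."""
    total_seconds = MESSAGES * per_msg_ms / 1000
    return math.ceil(total_seconds / WINDOW_SECONDS)

classical = verifiers_needed(CLASSICAL_MS)
pqc = verifiers_needed(PQC_MS)
print(f"verifiers: {classical} classical -> {pqc} PQC "
      f"({PQC_MS / CLASSICAL_MS:.1f}x slowdown)")
```

Verification parallelizes well, so capacity can in principle be bought – but the sketch shows why "substantially more computing capacity" is the honest framing, not "a few extra milliseconds".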
For real-time payment schemes like FedNow and TIPS, where end-to-end latency commitments are tight, PQC introduces a performance degradation that cannot simply be absorbed. And a critical detail from the BIS report: Project Leap Phase 2 replaced physical HSMs with software-based key files for testing flexibility. This means the HSM performance question – which could introduce additional latency in production – remains entirely open.
The computational story is more nuanced than the headline numbers suggest. ML-DSA-44 verification is substantially faster than RSA-3072 verification, and ML-KEM operations complete in single-digit milliseconds even on embedded devices. However, ML-DSA-44 signing is significantly slower than ECDSA-P256 signing – roughly 3–24× slower depending on implementation and platform. The speed advantage of lattice-based schemes lies primarily in verification, not signing. The speed problem is specific to verification in software implementations without hardware acceleration – which is exactly the scenario most payment systems will encounter first.
Emerging approaches. The response is emerging along three fronts. First, hardware acceleration: Entrust’s FPGA-equipped nShield 5 is the first commercially available answer to the verification speed problem, and the broader HSM industry is likely to follow. I have argued that without hardware acceleration, the G7’s goal of a secure and efficient financial system “may be mathematically impossible” on current general-purpose CPUs – a provocative claim, but one supported by the Project Leap data.
Second, algorithmic optimization. Researchers continue to improve the software performance of lattice-based algorithms, and NIST’s additional signature standardization candidates include schemes optimized for verification speed. The gap between benchmark performance and the Project Leap measurements also suggests that implementation optimization – tuning cryptographic libraries for payment processing workloads, optimizing memory allocation, parallelizing verification – could recover significant performance.
Third, architectural accommodation. Batch settlement systems have more tolerance for increased processing time than real-time systems, and institutions may need to restructure some settlement workflows to provide PQC-compatible latency budgets. This is uncomfortable – nobody wants to slow down settlement – but for systems that cannot achieve hardware acceleration quickly, it may be the pragmatic interim solution.
Challenge 6: The Coordination Problem Across a Fragmented Ecosystem
The problem. A payments CISO cannot unilaterally migrate to PQC. If your issuing bank deploys PQC signatures on card authentication certificates, but the acquiring bank’s terminals cannot verify them, the transaction fails. If SWIFT migrates to PQC message signing, but your institution’s HSMs are not yet upgraded, you are disconnected from the global payment network. If one card network mandates PQC and another does not, processors must maintain parallel cryptographic stacks indefinitely.
This coordination challenge is compounded by the divergent positions of key industry bodies – and by the fact that the industry has not even completed its previous cryptographic migration. EMV is still in the process of migrating from RSA to ECC, a transition expected to complete around 2030. The PQC migration is arriving before the ECC migration is finished, creating a telescoping problem: some institutions may need to skip ECC entirely and go straight from RSA to PQC, while others will deploy ECC only to begin replacing it within a few years. This complicates planning, stretches vendor capacity, and creates interoperability hazards during a prolonged period where three generations of cryptography coexist in the same ecosystem.
EMVCo takes a notably conservative position, stating it does not expect quantum computing to threaten EMV infrastructure until at least 2040. The organization argues that online transactions using symmetric cryptography are already quantum-safe and that harvest-now-decrypt-later (HNDL) attacks are inconsequential for EMV because dynamic cryptograms expire after use. This stands in sharp contrast to the G7’s 2030–2032 critical systems timeline and the broader industry consensus. EMVCo’s 2040 estimate is an outlier that could create complacency in offline card authentication migration – precisely the area where EMV is most vulnerable.
Mastercard, meanwhile, has been the most aggressive card network on PQC, with quantum-resistant contactless cards since 2022, a substantial R&D white paper, and active participation in Europol’s Quantum Safe Financial Forum. Visa maintains a research track but has not matched Mastercard’s public product announcements. No specific PQC announcements have emerged from American Express, UnionPay, Discover, or JCB beyond their participation in EMVCo.
CLS (Continuous Linked Settlement) and CHIPS – critical U.S. payment systems – have not announced quantum readiness initiatives. These represent coordination gaps in the global payment infrastructure.
Emerging approaches. The most significant coordination mechanisms are the G7 Cyber Expert Group’s six-phase framework (awareness, inventory, planning, execution, testing, validation), the guidance Europol’s Quantum Safe Financial Forum published in January 2026 – including a practical migration prioritization framework – and FS-ISAC’s PQC Working Group guidance for the payment card industry. These provide common frameworks, timelines, and vocabulary for cross-border coordination – the kind of scaffolding that makes synchronized migration possible.
JPMorgan Chase has taken a distinctive path, deploying the Quantum-secured Crypto-Agile Network (Q-CAN) – a QKD-secured network connecting two data centers over 29 miles of fiber in Singapore, achieving 45 days of continuous operation at 100 Gbps. JPMorgan pursues a dual remediation strategy incorporating both PQC and QKD, and achieved a certified quantum randomness milestone with Quantinuum published in Nature in March 2025. This positions JPMorgan as both a technology leader and a proof point that financial institutions can move beyond planning into deployment.
The coordination problem will ultimately be solved the way the EMV migration was solved: through a combination of regulatory pressure, industry standardization body consensus, and a few large players creating market facts that others must follow. Mastercard’s early moves on quantum-resistant contactless, SWIFT’s SwiftNet 8.0 commitment, JPMorgan’s Q-CAN deployment – these are not just technology experiments. They are strategic signals that reshape the competitive landscape. When a major card network offers quantum-resistant cards and a competitor does not, the competitive dynamic begins to exert pressure independent of regulatory mandates.
There is a precedent worth studying. In the early 2000s, the EMV migration was stuck for years in a coordination deadlock – issuers did not want to replace cards until merchants had EMV terminals, merchants did not want to invest in EMV terminals until issuers had distributed EMV cards, and nobody wanted to move first. What broke the deadlock was a liability shift: card networks announced that fraud liability would transfer to whichever party in a transaction had the lower security standard. The party without EMV paid for fraud. Suddenly, the coordination problem became an economic problem with a clear solution.
Something similar may be needed for PQC migration. A regulatory or industry liability framework that assigns quantum-related breach costs to the weakest cryptographic link in a payment chain would create powerful economic incentives for coordinated migration. No such framework exists yet. But the regulatory direction – G7, EU, MAS, HKMA – is clearly heading toward a world where quantum vulnerability becomes a compliance risk, and compliance risk has a well-established tendency to concentrate minds.
The question is whether the coordination happens proactively, on a planned timeline, or reactively, after an incident or regulatory mandate forces the issue.
Challenge 7: The Missing Regulatory Specificity
The problem. Regulators have set timelines and frameworks, but the payments industry still lacks specific, actionable guidance on several critical questions. PCI SSC has not published standalone PQC guidance – the closest relevant requirement is PCI DSS 4.0 Requirement 12.3.3, which requires a cryptographic inventory and monitoring of threats to current cryptographic algorithms. This is a sensible crypto-agility requirement, but it does not specify PQC algorithms, migration timelines, or testing requirements. Institutions are left to interpret “monitoring threats to current algorithms” as implicit PQC preparation.
DORA (the EU’s Digital Operational Resilience Act), applicable since January 2025, does not explicitly mandate PQC but its regulatory technical standards reference quantum threats and require monitoring of “cryptographic threats including those from quantum advancements.” Singapore’s MAS has mandated inventories and migration strategies but has not specified algorithm choices or compliance deadlines. No regulator has yet addressed the hard questions: When must PQC be deployed in production payment systems? Which algorithms are acceptable? What testing is required? What happens to institutions that miss the timeline?
Emerging approaches. The regulatory landscape is evolving rapidly, if unevenly. The EU’s coordinated PQC implementation roadmap (June 2025) requires member states to establish national roadmaps by end of 2026 – which will likely crystallize into more specific requirements for financial institutions. NIST’s IR 8547 deprecation timeline, once finalized (it remains in initial public draft), will provide a hard floor beneath which quantum-vulnerable algorithms cannot be used.
Europol’s Quantum Safe Financial Forum, in its January 2026 framework, assesses quantum risk across three parameters: data shelf life, exposure to attackers, and severity of business impact. This risk-based approach aligns with my advocacy for risk-driven strategies when a full cryptographic inventory is not immediately feasible – an acknowledgment that perfect information should not be the enemy of progress.
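To make the three-parameter approach concrete, here is a minimal sketch of how such a scoring might be implemented. The parameter names follow the Europol framework, but the scales, weights, and example systems are my own illustrative assumptions, not Europol’s methodology.

```python
# Illustrative quantum-risk score over the three QSFF-style parameters:
# data shelf life, attacker exposure, and business impact.
# Scales and weights are hypothetical, chosen only to show the idea.

def quantum_risk_score(shelf_life_years: float,
                       exposure: int,        # 1 (isolated) .. 5 (internet-facing)
                       impact: int) -> float:  # 1 (minor) .. 5 (systemic)
    """Higher score = migrate sooner."""
    # Data that must stay confidential for many years is already exposed
    # to harvest-now-decrypt-later collection; saturate at 10 years.
    shelf = min(shelf_life_years / 10.0, 1.0)
    return round(shelf * exposure * impact, 2)

systems = {
    "archived settlement records": (25, 2, 5),
    "contactless card fleet":      (8, 4, 4),
    "internal build pipeline":     (1, 1, 3),
}
ranked = sorted(systems, key=lambda s: quantum_risk_score(*systems[s]),
                reverse=True)
for name in ranked:
    print(name, quantum_risk_score(*systems[name]))
```

Even this toy version surfaces the framework’s point: an externally exposed card fleet can outrank a higher-value but isolated archive, which is exactly the kind of trade-off a pure “protect the crown jewels” instinct misses.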
The practical implication is that payment institutions should not wait for prescriptive regulation. The G7 framework, FS-ISAC guidance, and Europol prioritization methodology collectively provide sufficient direction to begin. Institutions that proactively implement these frameworks will find themselves ahead of eventual regulatory requirements, not scrambling to catch up.
What the Payments Industry Should Do Now
The challenges above are formidable. They are also solvable – not overnight, not cheaply, but solvable. No single institution can tackle all seven simultaneously with equal intensity. The art of PQC migration planning is triage: identifying which challenges bite first for your specific organization and addressing them in a sequence that builds capability without breaking operations. Here is what should be underway now, in 2026.
Start with the cryptographic inventory you can do, not the perfect one you cannot. PCI DSS 4.0 Requirement 12.3.3 already mandates this. Focus first on identifying systems that handle long-lived data (mortgage records, tokenized credentials, compliance archives), systems that handle high-value settlement flows, and systems with the longest upgrade cycles (HSMs, smart cards, embedded payment terminals). Do not let the impossibility of a comprehensive inventory prevent a risk-prioritized one.
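A common way to triage such an inventory is Mosca’s inequality: a system is already at risk if its data shelf life X plus its migration time Y exceeds the time Z until a CRQC exists. The sketch below applies that test; the systems and year figures are illustrative placeholders, and the CRQC horizon is an assumption you would tune to your own threat estimate.

```python
# Mosca's inequality: if (shelf life X) + (migration time Y) > (CRQC horizon Z),
# data protected today can still need confidentiality after a CRQC arrives,
# so migration is already urgent. All figures below are illustrative.

def mosca_at_risk(shelf_life_x: float, migration_y: float,
                  crqc_horizon_z: float) -> bool:
    return shelf_life_x + migration_y > crqc_horizon_z

inventory = [
    # (system, data shelf life in years, estimated migration years)
    ("mortgage records archive",   25, 3),
    ("tokenized credential vault", 10, 4),
    ("marketing web frontend",      1, 1),
]

CRQC_HORIZON = 10  # assumed years until a CRQC; an input, not a prediction

for system, x, y in inventory:
    flag = "MIGRATE NOW" if mosca_at_risk(x, y, CRQC_HORIZON) else "can wait"
    print(f"{system}: {flag}")
```

The value of the exercise is less the arithmetic than the forcing function: it makes teams write down shelf-life and migration-time estimates per system, which is precisely the risk-prioritized inventory the paragraph above describes.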
Deploy hybrid TLS on internal connections immediately. This is the single highest-impact, lowest-risk action available. The X25519+ML-KEM-768 hybrid key exchange is already supported in major TLS libraries and incurs negligible performance overhead – AWS KMS documented 0.05% throughput reduction for hybrid ML-KEM key exchange. Protecting data in transit with hybrid PQC eliminates the HNDL risk for interbank communications, internal microservice traffic, and API connections without requiring changes to application logic or payment message formats.
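The reason hybrid key exchange is low-risk is structural: the session key is derived from both shared secrets, so it remains safe if either component survives. The stdlib-only sketch below shows that combiner idea only; it is NOT the actual X25519MLKEM768 construction specified for TLS, and the key values are dummies.

```python
# Conceptual sketch of a hybrid key-exchange combiner: the session key is
# derived from the classical and post-quantum shared secrets together, so
# an attacker must break BOTH to recover it. This illustrates the principle
# only; real TLS hybrids (X25519MLKEM768) define the derivation differently.
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract step (RFC 5869): HMAC over the input keying material.
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hybrid_session_key(ss_classical: bytes, ss_pq: bytes,
                       transcript: bytes) -> bytes:
    # Concatenate both shared secrets and bind them to the handshake
    # transcript, so neither secret alone determines the session key.
    return hkdf_extract(transcript, ss_classical + ss_pq)

key = hybrid_session_key(b"\x01" * 32, b"\x02" * 32, b"handshake-hash")
assert len(key) == 32
```

In practice you should not build this yourself: recent releases of mainstream TLS stacks (for example, Go’s crypto/tls since 1.24 and OpenSSL from 3.5) negotiate X25519MLKEM768 natively, so deployment is typically a configuration and version-upgrade exercise rather than new code.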
Engage HSM vendors on specific PQC upgrade paths. Not all HSMs can be upgraded via firmware – some will require physical replacement. Understanding which models can accept PQC-capable firmware, what the certification timeline looks like, and what the per-unit cost of replacement will be is essential for budget planning. Futurex’s PCI HSM validation provides a payment-specific compliance path that should be evaluated. Begin non-production PQC testing on current HSM platforms.
Plan for ISO 8583 disruption. If your institution processes card transactions over ISO 8583, the message format will need significant modification to accommodate PQC signatures. This should be factored into technology roadmaps now, not discovered as a blocker later. Institutions still building new systems on ISO 8583 should strongly consider accelerating ISO 20022 adoption, which provides more extensible message structures.
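To see why the format breaks, compare signature sizes against ISO 8583’s variable-length field limits: an LLLVAR field’s three-digit length prefix caps it at 999 bytes. The signature sizes below are the published FIPS 204/205 figures (and typical encodings for RSA/ECDSA); the framing of the comparison is mine.

```python
# Why PQC signatures strain ISO 8583 layouts: a classic LLLVAR field
# carries at most 999 bytes, which holds today's RSA/ECDSA signatures
# but none of the NIST PQC signature sizes (FIPS 204 ML-DSA, FIPS 205 SLH-DSA).
LLLVAR_MAX = 999  # three-digit decimal length prefix => max 999 bytes

signature_sizes = {
    "ECDSA P-256":  72,    # typical DER encoding
    "RSA-2048":     256,
    "ML-DSA-44":    2420,
    "ML-DSA-65":    3309,
    "SLH-DSA-128s": 7856,
}

for alg, size in signature_sizes.items():
    verdict = "fits" if size <= LLLVAR_MAX else "OVERFLOWS"
    print(f"{alg}: {size} bytes -> {verdict} LLLVAR")
```

The options are all invasive: carving a signature across multiple fields, defining new private-use fields with longer length prefixes, or moving the flow to ISO 20022 – which is why this belongs on roadmaps now.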
Monitor NIST’s additional signature standardization round. Smaller-signature PQC algorithms (SQIsign, HAWK) could significantly reduce the message-format and bandwidth challenges described above. Standardization is expected 2027–2028. Building crypto-agility into new systems ensures these algorithms can be adopted when available.
Appoint executive-level ownership. The G7 framework explicitly calls for this, and it is not ceremonial advice. PQC migration crosses every organizational boundary – security, infrastructure, compliance, vendor management, card operations, settlement. Without executive sponsorship and dedicated program management, it will die in committee. One useful model: treat PQC migration as a multi-year program akin to Y2K or PCI compliance, with its own budget line, dedicated team, and board-level reporting cadence. In workshops with Asian payment industry participants, only 20% reported that their senior stakeholders were “very familiar” with the quantum threat, while roughly 44% admitted their leadership was “not familiar at all.” Closing this awareness gap is a prerequisite for everything else.
Begin re-encrypting long-lived data with quantum-safe protections. As I have argued in my analysis of why PQC alone is not sufficient, the HNDL threat demands immediate action on what some call “Track 0” – protecting data that has already been generated but not yet migrated. This means re-encrypting archived settlement records, compliance data, and long-lived authentication credentials under AES-256 or hybrid PQC wrapping. It means shortening trust lifetimes on certificates to limit the window of retroactive vulnerability. It means deploying quantum-safe VPN tunnels around the most sensitive data flows even before the full migration program reaches those systems. These are defensive measures that can be implemented with today’s technology and that reduce risk immediately.
Participate in industry coordination. The FS-ISAC PQC Working Group, Europol’s Quantum Safe Financial Forum, NIST’s NCCoE migration guidance, and national banking association quantum readiness groups all provide venues for sharing progress, identifying interoperability issues early, and influencing the standards that will govern the migration. Institutions that participate shape the outcome. Those that wait inherit it.
The Question No One Wants to Answer
There is a deeper question lurking behind the technical challenges, one that the industry is only beginning to confront honestly. What if the timeline is wrong – not too aggressive, but too optimistic?
The Global Risk Institute’s 2024 survey of 32 quantum computing experts places the probability of a cryptographically relevant quantum computer (CRQC) at 19–34% within 10 years and 60–82% by 2044; individual expert estimates range from 4 to 16 years. Algorithmic advances – particularly Gidney’s 2025 improvements to quantum factoring – have meaningfully reduced theoretical CRQC resource estimates, with one illustrative analysis suggesting the timeline may have shifted approximately seven years closer. If the lower bound is right, a CRQC could arrive while the payments industry is still mid-migration, with billions of cards, millions of terminals, and thousands of HSMs still running quantum-vulnerable cryptography.
This is the scenario that makes “harvest now, decrypt later” more than a theoretical concern. Financial data recorded today – correspondent banking messages, settlement instructions, cardholder authentication flows – could be decrypted retroactively, enabling transaction forgery, identity theft, and the kind of systemic disruption that the Citi Institute modeled at trillions of dollars in losses. The Federal Reserve’s September 2025 paper on HNDL risks was explicit: data privacy risks created by HNDL harvesting cannot be fully mitigated retroactively.
There is a companion threat that receives less attention but may be equally consequential for payments: “trust now, forge later.” Unlike HNDL, which targets confidentiality, TNFL targets integrity. A future quantum adversary would not merely read past encrypted messages – it would forge digital signatures. In payments, where digital signatures authenticate everything from SWIFT messages to software updates to device certificates, the ability to forge a signature is the ability to counterfeit trust itself. A forged signature on a firmware update could compromise an entire fleet of payment terminals. A forged SWIFT message could redirect settlement funds. A forged certificate could create cloned cards that pass offline authentication. Every long-lived digital signature in the financial system becomes a ticking liability unless the signing infrastructure migrates to PQC before a CRQC emerges.
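The “ticking liability” framing can be quantified crudely: a classically-signed artifact is TNFL-exposed for as long as verifiers keep trusting it after a CRQC exists. The sketch below flags such artifacts; the example artifacts, trust lifetimes, and CRQC year are all illustrative assumptions, not predictions.

```python
# TNFL exposure sketch: a classically-signed artifact becomes forgeable
# once a CRQC exists, for as long as verifiers still trust the signature.
# Flag anything whose trust lifetime extends past an assumed CRQC year.
ASSUMED_CRQC_YEAR = 2033  # illustrative assumption, not a forecast

artifacts = [
    # (artifact, signing year, years verifiers will keep trusting it)
    ("terminal firmware signing key", 2026, 10),
    ("TLS server certificate",        2026, 1),
    ("card issuer CA certificate",    2024, 15),
]

exposure = {}
for name, signed, trust_years in artifacts:
    trusted_until = signed + trust_years
    exposure[name] = trusted_until > ASSUMED_CRQC_YEAR
    verdict = "TNFL-exposed" if exposure[name] else "expires in time"
    print(f"{name}: trusted until {trusted_until} -> {verdict}")
```

The pattern it surfaces matches the paragraph above: short-lived TLS certificates are largely self-healing, while firmware signing keys and CA certificates – the long-lived roots of trust in payments – are exactly the signatures that must migrate first.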
Imagine a scenario in 2033: quantum computing has progressed faster than the consensus expected. A mid-tier nation-state – or a well-resourced criminal organization that has purchased quantum computing access on a nascent black market – begins selectively decrypting SWIFT traffic harvested years earlier. They do not attack directly; that would trigger circuit breakers and incident response. Instead, they use the decrypted data to construct elaborate fraud schemes, armed with perfect knowledge of bank routing codes, correspondent relationships, authorized signing officers, and internal settlement procedures. The frauds are small enough individually to avoid detection, large enough collectively to extract billions. By the time the pattern is identified, the harvested data has enabled attacks across dozens of institutions in multiple jurisdictions.
This is not science fiction. Every element of it – HNDL harvesting, quantum decryption, fraud enabled by insider-level knowledge – is either already happening (the harvesting) or a direct, well-understood consequence of quantum cryptanalysis (the rest). The only uncertainty is the timeline.
And this is why the seemingly dry, technical challenges described in this article – buffer overflows in message parsers, RAM limitations on smart card chips, certification backlogs for HSMs – matter so profoundly. Each unresolved challenge is a reason the migration takes longer. Each additional month of delay is another month of data harvested, another month of signatures that could be forged retroactively, another month in which the industry’s most fundamental security assumptions remain unprotected.
The payments industry has a window. How large that window is depends on quantum computing progress that no one can predict with precision. What can be predicted with precision is the time it will take to migrate – and for a global, interconnected, regulation-bound, hardware-dependent ecosystem like payments, that time is measured in years, not months.
FS-ISAC has a phrase for the current moment: “crypto-procrastination.” It is a gentle term for what is, in the context of trillions of dollars of daily settlement activity, a potentially catastrophic strategic error. The organizations that begin in earnest in 2026 will have options. Those that wait for certainty will find that certainty arrives in the form of a crisis – and by then, the window for an orderly migration will have closed.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.