Post-Quantum Negligence: Legal Risks of Failing to Prepare for the Quantum Threat

Legal & scope note. This article presents a risk‑based interpretation of post‑quantum security and potential liability exposure. It synthesizes public guidance (e.g., NIST/NCSC/EU materials) and applies legal concepts such as foreseeability and the Learned Hand formula as analytic tools to frame decision‑making. Those tools are illustrative, not dispositive, and courts may reach different conclusions as facts and standards evolve.

Statements about how regulators or courts “will likely” respond are scenario analysis, not settled doctrine. Quantum timelines are uncertain, reasonable experts disagree on probabilities, and jurisdictional differences matter. Treat the arguments here as one informed perspective intended to help CISOs and boards calibrate prudence – not as predictions of inevitable outcomes.

Nothing in this article constitutes legal advice. Readers should seek independent legal counsel, validate assumptions against current authority in their jurisdiction(s), and weigh alternative expert views before making decisions.


Quantum computing is no longer a far-off hypothesis – it’s a rapidly emerging reality that could render today’s encryption obsolete. For CISOs and their boards, this means a new kind of cybersecurity crisis is on the horizon. Sensitive data that is safely encrypted now may be sitting like a ticking time bomb, waiting to be cracked by tomorrow’s quantum machines. The message is clear: security leaders must start preparing now – or face severe legal and fiduciary consequences when “Q-day” finally hits. The goal here is not to sell fear but to translate public signals – from standards bodies and regulators – into a practical risk lens for CISOs and boards.

The Quantum Threat and “Harvest Now, Decrypt Later” (HNDL)

Quantum computing poses an unprecedented security threat: once sufficiently powerful quantum computers arrive, they will break the public-key encryption (RSA, ECC, Diffie-Hellman) that protects virtually all digital communications and much of the data stored today. Adversaries are not waiting for that day – they are stealing encrypted data now and planning to decrypt it later when quantum capabilities mature, the so-called “harvest now, decrypt later” (HNDL) strategy. In other words, a breach may effectively be occurring today (data exfiltration), even if the actual harm (decryption of sensitive information) will only manifest years from now when quantum decryption becomes feasible.

This time-lag creates a serious challenge for organizations. Data with a long shelf life – for example, personal records, trade secrets, intellectual property, financial and healthcare data that must remain confidential for years – is at high risk. Attackers can harvest such encrypted troves in 2025 and hold onto them until, say, 2030+ when a cryptographically relevant quantum computer (CRQC) exists. At that point, previously secure data could suddenly be exposed. Forward secrecy evaporates: what’s protected today may be plaintext tomorrow. Importantly, this threat is no longer science fiction. Expert consensus now quantifies a non-trivial probability that quantum decryption capabilities will emerge within the next decade. My own prediction puts it around 2030.

For a CISO, this means the clock is ticking. We don’t know exactly when “Q-day” – the day quantum computers break encryption – will hit. Unlike Y2K, there is no fixed deadline and no advance warning of the moment everything fails. But we do know it’s coming with enough certainty that failing to prepare is courting disaster. As Ollie Whitehouse of the UK NCSC put it, as quantum tech advances, upgrading to quantum-resistant security “is not just important – it’s essential” to ensure today’s secrets remain secure in the future.

Foreseeability and Timing: When Does the Duty to Act Begin?

This HNDL scenario creates a unique legal timing problem. In tort and negligence law, foreseeability of harm is central to determining when a duty of care arises. Here, the harmful act (data theft) is happening now, but the harmful consequence (decryption and exposure of data) may not occur for years. So when does the duty to protect attach? Can an organization be negligent today for failing to defend against an exploit that might only fully hit in 5-10 years?

Increasingly, the answer appears to be yes – the risk is foreseeable now, therefore the duty exists now. We have entered a period where the quantum threat is well-documented by experts and governments, and viable defenses exist. It is no longer speculative; it’s a known emerging risk with quantifiable probabilities. Courts and regulators will likely view quantum decryption of harvested data as legally foreseeable in the mid-2020s, given the public evidence:

  • Established Warnings: National security agencies have warned of HNDL for years. The U.S. NSA, for example, publicly flagged “harvest now, decrypt later” tactics as an active concern – sensitive data stolen today “may be stored…with the intention of being decrypted years from now when quantum capabilities mature,” as a recent NSA-affiliated op-ed noted. Likewise, NSA and GCHQ have directly warned that nation-state adversaries are likely already collecting encrypted communications now in anticipation of future quantum decryption.
  • Available Standards & Guidance: The U.S. National Institute of Standards and Technology (NIST) has published post-quantum cryptography (PQC) standards as of August 2024, after seven years of review. These include vetted quantum-resistant algorithms ready for adoption. The existence of vetted solutions undermines any claim that mitigation is “too premature.” In other words, tools to fix the problem are on the shelf today.
  • Industry and Expert Consensus: Expert surveys put measurable odds on quantum attacks within business-relevant timeframes. It’s no longer if but when – and “when” could well be within the useful life of data and systems currently in use. Think of it this way: if your organization holds data that needs to remain secure until 2035, and experts say there’s a significant chance of quantum breaks by 2030-2035, the risk falls squarely inside your planning horizon.
  • Active Attacks Underway: It’s documented that adversaries are already conducting HNDL breaches. State-sponsored threat actors and cybercriminals alike are stockpiling encrypted data. In effect, the breach is happening in slow motion – a reasonable security professional in 2025 knows that data being stolen today will be compromised later if nothing is done. And even compliance-driven data retention (e.g. archives kept for years due to regulations) can unintentionally expand the attack surface for future quantum-enabled breaches.

Given these factors, a strong argument emerges that the standard of care for cybersecurity in 2025 includes preparing for the quantum threat. What was once a distant hypothetical is now a foreseeable risk with available mitigations. As one legal analysis notes, entities that handle sensitive data and fail to address known quantum risks could face negligence claims or regulatory penalties – particularly in jurisdictions that mandate “reasonable safeguards against foreseeable risks”.

In short, inaction during this harvest phase may breach the duty of care, even though the “explosion” of the time-bomb hasn’t gone off yet. The law doesn’t require clairvoyance, but it does require reasonable foresight. And by 2025, the quantum threat is plainly foreseeable.

The Learned Hand Test – Quantifying (In)Action

Some organizations ask: Is it really negligence to not invest in post-quantum migration now? To answer that, consider the classic Learned Hand formula for negligence (from U.S. v. Carroll Towing Co.). The formula posits that a party is negligent if B < P · L, where:

  • B is the burden (cost) of taking precautions,
  • P is the probability of harm,
  • L is the severity of the expected loss.

We can actually plug in rough numbers for the quantum risk scenario:

  • B (Burden): The cost of beginning a PQC migration and crypto-agility program – say on the order of a few million dollars for a mid-to-large enterprise (for example, $2M-$5M, depending on complexity and infrastructure). This includes expenses like upgrading software libraries, replacing algorithms, hiring crypto experts, and implementing “crypto-agile” systems that can swap encryption easily.
  • P (Probability): The likelihood that a CRQC emerges and that your stolen encrypted data gets decrypted within a relevant timeframe (e.g. the next 5-10 years). Expert estimates increasingly put this at 33% or more within a decade. A 33% chance of catastrophe is enough to keep any risk manager up at night.
  • L (Loss): The impact if it happens – effectively a data breach of everything you thought was securely encrypted. This could mean regulatory fines, breach notification costs, lawsuits, lost intellectual property, reputational damage, business disruption, etc. For a large financial or healthcare institution, a major breach’s cost can easily be in the hundreds of millions of dollars. (Consider that Equifax’s 2017 breach ended up costing around $700 million in settlements and fines, or that GDPR fines can be 4% of global turnover.)

Even using conservative figures, P · L (the expected loss) could be in the tens of millions (for example, 0.1 × $100M = $10M). If the B (precaution cost) is only a few million, then indeed B < P·L, implying it is economically (and arguably legally) unreasonable not to take the precaution. This equation will vary by organization and risk tolerance – but the key point is we can no longer say the quantum threat is “incalculable” or that defenses are “prohibitively expensive.” We can actually perform a risk assessment with real numbers, meaning a future court or regulator could do the same. The fact that NIST, global experts, and government agencies have published concrete timelines and standards makes the risk quantifiable and documented.

Any organization that ignores those numbers might be judged as having made an irrational choice by not investing in mitigation.
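To make the arithmetic concrete, here is a minimal sketch (in Python) of that back-of-the-envelope calculation. The figures are illustrative assumptions mirroring the example above, not estimates for any particular organization; swap in your own B, P, and L.

```python
# Illustrative Learned Hand check for a PQC migration decision.
# All figures are assumptions for demonstration only.

def learned_hand_check(burden: float, probability: float, loss: float) -> dict:
    """Negligence is implied under the Learned Hand formula when B < P * L."""
    expected_loss = probability * loss
    return {
        "burden_B": burden,
        "expected_loss_PxL": expected_loss,
        "precaution_warranted": burden < expected_loss,
    }

if __name__ == "__main__":
    # Assumed inputs: B = $3M migration program, P = 10% chance of a
    # quantum-enabled exposure within the data's lifetime, L = $100M impact.
    print(learned_hand_check(burden=3_000_000, probability=0.10, loss=100_000_000))
    # Expected loss of $10M far exceeds the $3M burden, so precaution is warranted.
```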

“Reasonable Person” Standard in 2025

Negligence is ultimately judged against what a reasonable professional would do under the circumstances. Fast-forward to 2032 or 2035, in a courtroom dissecting a massive data exposure from quantum decryption of data stolen years earlier: the question will be, “What did a reasonable organization know in 2024-2025, and what actions would a reasonable CISO or company have taken?” Already today, any reasonable CISO is aware that:

  • NIST PQC standards exist (i.e. the tools to migrate are available).
  • Allied governments have sounded the alarm – for example, the EU and UK have issued roadmaps and directives (more on that below), and U.S. federal agencies have been explicitly mandated to act (even if the private sector has not yet been).
  • Expert publications have quantified the risk – making it a known, documented threat rather than a sci-fi scenario.
  • Harvesting attacks are occurring now – meaning the breach is already in progress, just awaiting future decryption.

In light of these facts, by 2024-2025 it appears increasingly unreasonable for an organization handling long-lived sensitive data to do nothing about quantum risks. As one commentary on emerging tech law put it, a business’s failure to address quantum threats “could result in legal exposure,” with potential negligence or misrepresentation claims especially where reasonable safeguards against known risks are expected.

Put simply, the reasonable CISO in 2025 is expected to at least be doing something: assessing the threat, making a plan, starting initial steps. Those that stick their head in the sand risk being found negligent for failing to mitigate a known, highly dangerous vulnerability.

An analogy can be drawn to widely publicized vulnerabilities in traditional IT. If in 2025 you somehow ignored a critical software security alert (say, a major zero-day exploit with available patches), and years later it leads to a breach, you would have a hard time arguing that the inaction was “reasonable.” The same logic now applies to quantum: the warnings and fixes are out there in plain view.

Regulatory Pressure: EU vs. US Approaches

Different jurisdictions are addressing the post-quantum preparedness gap in markedly different ways. Broadly, civil-law regimes (like the EU) are codifying duties upfront, while common-law regimes (like the US) are slower, potentially waiting for either voluntary adoption or litigation after the fact. This divergence is important for CISOs operating globally, as it influences how soon you must act and what “reasonable” security looks like in each context.

Europe (EU) – Proactive Mandates to Act Now

In the EU, regulators have effectively decided that the duty to prepare during the harvest-now stage is explicit and enforceable. Two major pieces of EU law illustrate this proactive approach:

  • DORA (Digital Operational Resilience Act): This EU regulation for financial services (effective Jan 2025) requires firms to ensure resilience against ICT risks, including emerging threats. Guidance around DORA has made clear that migration to post-quantum cryptography is crucial to comply with the requirement of robustness against new technological threats. In other words, EU lawmakers have implicitly recognized quantum as a foreseeable threat that banks, payment providers, insurance firms, and others must plan for as part of their operational resilience. In fact, upgrading to PQC is described as “essential” for compliance and to ensure market stability. Non-compliance can lead to supervisory action and fines.
  • NIS2 (Second Network and Information Security Directive): This directive (which EU Member States must transpose into national laws by late 2024) imposes stricter cybersecurity obligations on a wide range of “essential” and “important” entities (from energy and transportation to healthcare, banking, infrastructure, etc.). While NIS2 doesn’t explicitly name “quantum,” it requires using state-of-the-art cybersecurity and risk management practices. EU implementing guidance has explicitly included quantum-resistant encryption planning as part of cyber resilience, especially for critical sectors like finance and energy. In practice, European banks and critical operators are being told to incorporate PQC into their strategies now.

Additionally, the EU has published a coordinated PQC transition roadmap for member states. It urges starting national transition efforts by 2026, having quantum-resistant protections for high-risk systems (e.g. critical financial infrastructure) by 2030, and completing as much as possible by 2035. This timeline aligns with the commonly cited Mosca inequality: you are safe only if your data’s shelf life (X) plus your migration time (Y) stays within the time remaining until a quantum break (Z), i.e. X + Y ≤ Z; if X + Y exceeds Z, you’re already too late. Europe is essentially legislating so that organizations don’t fall into that danger zone.
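One way to operationalize that inequality per system or data class is sketched below; the three intervals (in years) are rough planning estimates you would supply yourself, and the example values are purely hypothetical.

```python
# Mosca's inequality: if shelf_life (X) + migration_time (Y) exceeds the time
# remaining until a cryptographically relevant quantum computer (Z), the data
# cannot be fully protected in time - you are already in the danger zone.

def mosca_at_risk(shelf_life_years: float, migration_years: float, years_to_crqc: float) -> bool:
    """Return True when the safe condition X + Y <= Z is violated."""
    return shelf_life_years + migration_years > years_to_crqc

# Hypothetical example: records must stay confidential for 10 years, migration
# is estimated at 4 years, and a CRQC is assumed plausible within 7 years.
print(mosca_at_risk(shelf_life_years=10, migration_years=4, years_to_crqc=7))  # True -> act now
```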

The bottom line in the EU: If you fail to plan and implement quantum-safe measures, you’re not just risking a future breach – you’re breaking the law today. Regulators (and by extension, courts and insurers) are likely to treat non-compliance as negligence per se. An EU company that ignores PQC could face enforcement even before a breach occurs (for example, during an IT audit under DORA’s resilience testing), and will almost certainly face harsh penalties if an incident happens and quantum preparedness is found lacking. It’s a clear legal incentive to be proactive.

For CISOs in Europe, quantum risk isn’t just a theoretical worry – it’s now a compliance item on the checklist.

United Kingdom – Guidance Now, Regulation Later

The UK, though a common-law system, has been somewhat proactive via guidance. The UK’s National Cyber Security Centre (NCSC) rolled out a detailed PQC migration roadmap in early 2025, setting a goal for all critical systems to be quantum-resistant by 2035. The NCSC guidance breaks the effort into phases: for instance, by 2028 organizations should identify cryptographic assets needing upgrades and build a migration plan; by 2031, complete high-priority crypto upgrades; by 2035, finish migrating all systems to PQC. It strongly “encourages organizations to begin preparing now” so that the transition can be orderly and not rushed at the last minute. As Ollie Whitehouse (NCSC’s CTO) said in launching the roadmap, quantum will revolutionize tech but “poses significant risks to current encryption… [this] roadmap… helps ensure today’s confidential information remains secure in years to come”.

However, the UK government has explicitly decided not to impose binding regulations – yet. In an October 2024 policy response, the government stated “we agree that it is too early to establish regulatory requirements and legislation for quantum technologies at this stage” due to the nascency of the tech, even though “sustained action is required now”. Instead, the UK is focusing on raising awareness, developing standards, and letting industry lead on quantum readiness for now. The stance is essentially: prepare voluntarily now, and formal regulations will come once the technology matures a bit more.

For UK CISOs, this means you should act (per NCSC’s advice), but you won’t get fined by a regulator in 2025 purely for lacking a quantum migration plan. That said, the writing is on the wall: if you operate in critical sectors, regulators like the Bank of England, FCA, or ICO are undoubtedly expecting you to be considering quantum risks as part of good practice and operational resilience. And if a breach were to occur in a few years, even absent a specific PQC law, UK courts could still judge your inaction against the standard of “reasonable cybersecurity” informed by NCSC’s public guidance. Remember, the UK Data Protection Act already requires appropriate security measures; what’s “appropriate” evolves over time. By 2030, if quantum-safe encryption is widely available and recommended, failing to use it could be seen as de facto negligence or even a regulatory violation of the duty to secure personal data.

In summary, the UK approach is “prepare the industry and gently push,” with the expectation that by the time quantum threats fully materialize, regulations will be in place. Don’t be lulled by the lack of immediate laws – savvy organizations are already aligning with the 2035 roadmap to avoid a scramble when requirements eventually hit. As the NCSC itself has emphasized, starting the work now will “reduce the risk of rushed implementations and related security gaps” down the line.

United States – Unclear Path: Few Rules Now, Big Liability Later?

In the U.S., there is currently no comprehensive mandate for the private sector to address post-quantum risk, even though the federal government itself has taken it very seriously for its own systems. On the federal side, initiatives include:

  • NIST PQC Standards (2024): NIST finalized the first set of quantum-resistant cryptographic algorithms in August 2024. These are publicly available, and NIST (along with bodies like the Cybersecurity & Infrastructure Security Agency, CISA) is urging industry to test and implement them as soon as practicable.
  • Federal Agency Requirements: A 2022 Biden administration directive (NSM-10) sets the goal of migrating U.S. National Security Systems (military, intelligence, etc.) to quantum-resistant cryptography and mitigating as much quantum risk as feasible by 2035. Similarly, an OMB memo (M-23-02, Nov 2022) requires all federal civilian agencies to inventory their cryptographic systems, assess quantum vulnerabilities, and develop transition plans. The NSA’s updated CNSA 2.0 suite (for classified systems) sets deadlines too – PQC should be “preferred” by 2025 and mandatory for many applications by 2030-2033, with an absolute latest deadline of 2035. In short, the U.S. government’s house is being put in order, with concrete timelines for migration.
  • National Cyber Strategy (2023) and other policy documents have flagged quantum-threat mitigation as a priority, but so far the approach for the private sector is mostly carrots, not sticks. Agencies like CISA have published roadmaps, guidelines, and risk info for companies to follow (e.g. urging them to inventory crypto systems and develop “crypto agility”), but there is no direct federal regulation forcing private companies to act on quantum in 2025.

Thus, U.S. private companies find themselves in a gray area: the standards and knowledge exist (so you can’t claim ignorance), but there’s no specific law saying “thou shalt implement PQC by X date.” However, this doesn’t mean no legal risk – it just means the risk will likely come in other forms:

Regulatory Enforcement Using Existing Laws

Regulators can use broad statutes and rules to go after egregious security lapses. For example, the Federal Trade Commission has authority to prosecute companies for failing to provide reasonable data security (under Section 5 of the FTC Act’s prohibition on unfair practices). If a quantum-induced breach occurs in a few years, the FTC could argue that neglecting known encryption weaknesses was an unfair practice. Notably, the FTC has already indicated that failure to address known vulnerabilities is actionable. In early 2022 the agency warned companies to patch the Log4j software flaw or face legal action, explicitly stating that “the duty to take reasonable steps to mitigate known software vulnerabilities” is part of existing law. In that notice, the FTC drew parallels to the Equifax breach (where failing to patch a known bug led to 147 million records leaked, costing Equifax $700M). The FTC stated it “intends to use its full legal authority” against companies that don’t fix such known risks and will apply that same authority to “similar known vulnerabilities in the future.”

It’s not hard to see the analogy to quantum: once the threat and solutions are widely recognized (which arguably they are now), not upgrading your cryptography in time could be deemed an “unreasonable” security practice under general consumer protection or data protection laws. In effect, agencies like the FTC or state attorneys general could enforce quantum-readiness after the fact by treating an avoidable quantum-broken breach as a form of negligence or unfair practice.

Litigation and Case Law

The absence of a specific regulation doesn’t shield companies from lawsuits. We may see the first major “post-quantum negligence” lawsuits in the U.S. by the early 2030s, after the first big incident of data being decrypted. Plaintiffs (whether consumers, investors, or business partners) will argue that the breach was foreseeable and that the company breached its duty by continuing to use obsolete encryption. They might claim negligence, breach of contract (if contracts promised certain security levels), or misrepresentation/fraud if the company assured customers “your data is safe with us” despite knowing about the quantum issue. A Legal500 analysis notes such legal claims are on the horizon, including negligence or contract breaches for failing to implement adequate quantum-safe protections, especially once industry standards make those protections reasonable to expect.

In corporate contexts, shareholders might sue directors for mismanagement if a known risk like this was ignored (this parallels some climate-change litigation, where investors claim the board failed to address foreseeable long-term risks).

Future Regulations

There is also a chance that the U.S. will enact proactive rules before crises occur, at least in critical sectors. For instance, banking regulators or sector-specific agencies could update their security guidelines to explicitly require crypto-agility and quantum-resistant planning.

The Securities & Exchange Commission’s 2023 cyber risk disclosure rules already force public companies to report on how they govern and mitigate material cyber risks – a savvy CISO would include quantum in that risk assessment. If a company admits in filings that quantum is a material risk but does nothing, that could fuel shareholder suits or SEC enforcement later.

And states might not wait either: for example, New York’s Department of Financial Services (NYDFS) has cybersecurity regulations (23 NYCRR Part 500) which, as amended in November 2023, require the CISO and the highest-ranking executive (typically the CEO) to annually certify that the company meets its cybersecurity compliance obligations.

If quantum readiness becomes a baseline expectation (due to federal guidance or peer practices), an executive signing such a certification without addressing PQC could face consequences. In other words, lying or being grossly negligent in those attestations could result in penalties for the individuals and the company.


In summary for the U.S.: We have a curious gap – companies won’t get fined in 2025 simply for lacking a quantum transition plan (unlike in the EU). But the exposure is merely delayed, not diminished. When HNDL exploits start causing real damage, U.S. entities will likely face a wave of legal fallout under existing negligence, contract, and consumer protection laws. And U.S. courts, armed with hindsight, often ask what “everyone knew” at the time; pointing to EU and U.S. government standards as established benchmarks, they may well conclude that not preparing was a breach of duty. The prudent move for U.S. companies is to treat quantum risk as part of today’s definition of “reasonable cybersecurity,” even without a regulator explicitly telling you to – because chances are, regulators and courts will punish the omission after the fact if you guess wrong.

Who Would Be Liable When HNDL Gets Exploited?

One pressing question for CISOs is: if encrypted data is stolen today and only decrypted years later, who will be held responsible for that eventual breach? The potential liability can span multiple parties:

The Company (Data Owner)

The primary liability will almost always fall on the organization that originally owned or was entrusted with the data. If a corporation fails to migrate to PQC in a timely manner and in (say) 2030 an attacker decrypts years-old customer data, the corporation will face the same consequences as any data breach – only with arguably less excuse because the vulnerability was known in advance.

This includes regulatory fines (under laws like GDPR in the EU, or state data breach statutes in the US), class-action lawsuits from affected individuals, and contractual claims from business partners (e.g. if a bank promised in a contract to use “appropriate security” for a client’s data).

From a legal standpoint, the “breach” could be considered to have occurred when the data was stolen in encrypted form (if courts view encryption as the only thing delaying harm), or at least by the time it was decrypted. Either way, the failure to have upgraded encryption by the time it was needed would be squarely on the company’s shoulders. Especially in the EU, a company could be found in violation of specific regs like DORA/NIS2 and general data protection law (GDPR’s requirement to prevent unauthorized access) for not preventing the exposure.

In the US, regulators like the FTC could prosecute the company for unfair security practices, just as they do in conventional breaches.

Corporate Officers and Directors

A growing trend globally is to hold individual executives accountable for cybersecurity failures. Historically, companies have absorbed the penalties, but recent cases show that CISOs, CEOs, and even board members can face personal liability in extreme cases.

In the US, the watershed moment was the conviction of former Uber CSO Joe Sullivan in 2022 – he was found guilty of obstruction and misprision for concealing a breach (essentially covering up a 2016 hack) and was sentenced in 2023. It was the first instance of a senior security officer criminally convicted over a cyber incident. The judge explicitly noted that Sullivan got a very lenient sentence (probation) due to unique factors, but warned that future defendants in similar cases would likely face jail time. That case revolved around breach handling, not lack of PQC, but it sent a chilling message to CISOs that negligence or malfeasance in cybersecurity can hit them personally.

More directly relevant to negligence, regulators have started naming individuals in enforcement actions. The US FTC in 2022 took the unprecedented step of personally binding a CEO in a data security order: in the case of Drizly, an alcohol delivery company, the FTC’s consent order not only sanctioned the company for a data breach, but also required the CEO to implement an information security program at any future company he leads. This was explicitly because of the CEO’s role in the lax practices.

In Europe, laws exist to pursue individuals too. The UK’s Data Protection Act, for example, allows personal action against directors if a company’s breach occurred with their “consent or connivance or neglect.” The UK ICO has used its powers to personally fine directors of insolvent companies that tried to dodge fines for spam and privacy abuses. And financial regulators (e.g. under the UK FCA’s Senior Managers regime) can hold a senior manager accountable if they failed to take reasonable steps to prevent a breach in their area.

All this means that if a CISO or executive knowingly ignored the quantum risk – especially after peers, regulators, and internal reports flagged it – they could be in the firing line. A board member could face shareholder lawsuits for failing in the fiduciary duty to oversee cyber risks. A CISO could be terminated for cause and even sued by the company or shareholders for gross negligence if a post-quantum breach causes massive losses.

While it’s still somewhat speculative how a “post-quantum breach” case against an individual would unfold, the trend toward personal accountability in cybersecurity suggests that leaders cannot assume they’ll be immune. Demonstrating that you took prudent steps (e.g. raised the issue, started a plan, requested budget) will be vital as a defense. Otherwise, a future court might say you failed the “reasonable CISO” standard and hold you personally liable.

Vendors and Service Providers

Another question is whether third-party technology providers could share blame. For instance, if a company relied on a cloud service or encryption software from a vendor, and that vendor did not offer quantum-safe options or migrate in time, could the vendor be liable for downstream losses?

In practice, vendors will also face scrutiny. Many vendor contracts have clauses requiring “industry standard” security or compliance with applicable laws. If by 2030 the industry standard is PQC and the vendor is still using RSA, that could be a breach of contract or warranty. Similarly, if a security product was marketed as providing strong encryption, there might be product liability or misrepresentation claims if it became unfit for purpose (once “strong” encryption became crackable).

That said, vendors are likely to protect themselves with limitation-of-liability clauses, and they will argue that as of the time of service, the encryption was considered strong. We might see complex litigation where the breached company blames the vendor, and vice versa, with each arguing the other had the responsibility to upgrade. For example, a cloud provider holding encrypted customer data gets hit by HNDL decryption: under GDPR, the data controller (the company) is primarily liable to individuals, but it can then pursue the data processor (cloud vendor) if the vendor violated its obligations. If the vendor contract said it would implement appropriate security, failing to roll out PQC when others did might be seen as a violation.

So, smart vendors are already working to offer quantum-resistant options (indeed, major cloud providers and VPN services have begun introducing PQC testbed offerings). In the end, vendors aren’t “safe” either – if they provide cryptographic functionality, they could face lawsuits from their clients or even regulatory action (some regulators can directly oversee certain tech service providers in finance and critical sectors).

Professional Advisors (Lawyers, Auditors, Consultants)

A more peripheral angle is whether professionals who should have warned or guided the company could be liable. For example, lawyers handling sensitive long-term data (like estate plans, or healthcare records) have ethical duties to safeguard client information. The American Bar Association’s rules state that competent representation requires staying informed of relevant technology risks. A lawyer or cybersecurity consultant who never raised the quantum risk to a client (especially a client in a regulated industry with long-lived data) might face a malpractice claim if that client later suffers a quantum-related breach. Essentially, “you’re our advisor – you should have told us to start the transition early!” is a conceivable allegation. While no such cases exist yet, it’s plausible in fields like finance or healthcare that external auditors or advisors could be questioned: did you alert your client about this foreseeable risk and the available standards (NIST algorithms, etc.)? If not, the client (or its shareholders) might try to spread the blame to those advisors for failing to provide proper counsel. This remains speculative, but it underscores how broadly the duty of tech awareness is spreading.


In short, when encrypted data stolen today is decrypted in 2030, the fallout won’t be shrugged off as “bad luck, hackers are clever.” It will be seen as a failure of duty by multiple parties. The primary burden falls on the company holding the data, but its leaders may be individually accountable, and its partners or vendors could be pulled in through contracts and negligence claims. Each will point fingers about who should have upgraded the crypto. This is why a forward-looking CISO should document their efforts now – doing nothing not only increases the chance of the breach, but also leaves you with no defensible story about how you exercised due care.

Lessons from Other “Delayed Impact” Scenarios

This quantum predicament – a delay between lack of action and full consequences – is unusual but not entirely unprecedented. CISOs can draw analogies from other cases where failing to act on known risks led to liability only later:

Unpatched Vulnerabilities

Perhaps the closest parallel in cybersecurity is failing to patch a known critical software vulnerability. For example, the 2017 Equifax breach resulted from not patching a disclosed Apache Struts flaw; Equifax paid at least $700 million in penalties and settlements. Regulators and courts were unsympathetic to arguments like “attackers hadn’t used it when we ignored it” – once the harm happened, the prior inaction was squarely deemed negligent.

We see this pattern repeatedly (e.g. with bugs like Heartbleed in 2014, or Log4j in 2021): a security bulletin comes out warning of a severe flaw, some organizations delay remedial action, then a breach occurs exploiting that exact hole.

The lesson: If the vulnerability was public and a fix was available, inaction is negligence. Post-quantum risk is analogous but on a longer timeline – the “patch” (PQC algorithms and crypto-agile frameworks) exists now, the “exploit code” (a powerful quantum computer) might not fully exist for years, but you’re expected to patch before it hits.

In fact, U.S. regulators made this connection explicit: the FTC’s Log4j warning in 2022 stated that failing to remediate known vulnerabilities can violate the law even if an attack hasn’t happened yet, and that the FTC will apply this principle to future known threats. A quantum-enabled breach in 2030 will simply be a high-profile instance of “you knew this algorithm would be broken, and you didn’t replace it in time.”

Y2K and Systemic Time Bombs

The Y2K (Year 2000) problem is often cited as a historical parallel. In the late 1990s, organizations had to update software before the calendar hit Jan 1, 2000 or risk widespread system failures. Those who procrastinated came dangerously close to disaster. While Y2K fortunately saw few outages (due to massive remediation efforts), it set a precedent that widely foreseen technical deadlines demand proactive action.

Post-quantum migration is sometimes called “the new Y2K” in terms of scale – except we don’t know the exact deadline. Still, boards and regulators likely remember Y2K and will ask CISOs, “We managed that transition because we started early; why didn’t you manage this one when you had a similar heads-up?”

The takeaway: absence of a set date is not a license to wait indefinitely – it’s actually an argument to start sooner, because the deadline will only be obvious in hindsight (when it’s passed).

Climate Change and Long-Tail Risks

Outside of tech, there’s an analogy in climate risk and corporate liability. Companies today are being sued for not preparing for foreseeable climate impacts (like floods, fires) or for contributing to future harms. Courts are grappling with the idea that harm manifesting decades later can be traced to failures now.

In some jurisdictions, not building resilient infrastructure (despite clear warnings of rising sea levels, for example) can lead to negligence claims when damage eventually occurs. The principle is the same: foreseeability + failure to act = liability, even with a long delay.

For CISOs, the “climate change” equivalent is quantum computing – a force of nature that will arrive, with only the timeline in question. Just as boards now must consider long-term environmental risks, they must consider long-term data risks.

Latency in Data Breach Discovery

Another parallel: many data breaches themselves aren’t discovered until long after the initial intrusion. Companies have been fined under GDPR for breaches that began years earlier but were only revealed later. Regulators didn’t excuse the old security lapses just because time had passed – they judged the security measures in place at the time of intrusion by the standards of that time.

Similarly, if your data is stolen in 2025 and decrypted in 2032, a future authority will judge your 2025 encryption against 2025 best practices. If by 2025 those best practices (for your industry) included starting a quantum-safe upgrade, failing to do so will be seen in retrospect as a lapse.


In all these cases, the pattern is clear: where there is a known delay between cause and effect, the wise course is to act before the effect materializes. Those who didn’t (Equifax with patches, or hypothetically any firm ignoring Y2K) are later deemed negligent when the predictable disaster strikes. Post-quantum risk sits in that same category of “predictable disaster if not mitigated.”

Rising Personal Liability for Security Leaders

A special note for CISOs and other security executives: the era of personal accountability in cybersecurity is dawning. We touched on this above, but it merits emphasis, because preparing for quantum isn’t just about corporate risk – it could protect your own career and even personal liability exposure.

Regulators and prosecutors have shown increasing willingness to single out individuals for cybersecurity failures:

  • The Uber Case (U.S.): Joe Sullivan’s conviction showed that a CISO can face criminal charges for mishandling a breach. While his actions (active concealment and paying off hackers) were an extreme example, the fact is a jury and judge were willing to make an example of an executive. The presiding judge explicitly noted he gave Sullivan a break, but warned “I am convinced that future cases won’t end with no jail time” for executives who engage in wrongdoing. The takeaway: if a cyber incident is mishandled due to negligence or unethical behavior, claiming “I was just the CISO” won’t save you. Even outside criminal scenarios, civil liability is possible for officers (e.g. investor lawsuits), and at the very least, career consequences are guaranteed.
  • The Vastaamo Case (Finland): In Europe, a notable case involved a psychotherapy clinic’s CEO (Ville Tapio) who was criminally convicted for negligence after breaches beginning in 2018 in which patient therapy notes were stolen and leaked. He had failed to fix known security issues and kept the board in the dark. In 2023, a Finnish court handed him a suspended prison sentence for data protection failings – one of the first instances of an executive criminally punished for negligence in cybersecurity. This shows that even without specific “cyber laws,” traditional laws can be used to prosecute executives for gross negligence when it leads to harm (especially harm as sensitive as leaking patient records).
  • Regulatory Orders Naming Individuals: As mentioned, the FTC’s Drizly case made news by binding the CEO to security obligations. An FTC commissioner noted that holding leaders accountable was a strategy to “further deter negligent security practices.” Similarly, NYDFS’s amended Part 500 rules require the CISO and the highest-ranking executive to annually certify their cybersecurity compliance – lying on such a certification or being willfully ignorant could open them up to enforcement actions or even charges (akin to how Sarbanes-Oxley holds CEOs/CFOs accountable for false financial attestations).
  • Laws Allowing Personal Fines: The UK, as noted, allows the ICO to pursue directors in some cases. Under GDPR, while fines generally apply to companies, there’s nothing preventing other consequences for responsible persons if national laws provide for it. In general, European regulators are signaling that egregious security failings could implicate management. Germany’s corporate governance code, for instance, implies that board members must account for “organization-wide risks” (quantum arguably being one in coming years).

For a CISO, this means your best protection is to show you exercised due diligence and took “reasonable steps.” If quantum risk later blows up and you did nothing, you risk being painted as the one who “dropped the ball.”

Conversely, if you can show you repeatedly raised the issue to the board, requested resources, initiated a transition plan, and perhaps were stymied by budget or other constraints outside your control, you’re in a far better position.

Documentation is key: keep records of risk assessments that include quantum; if the decision was made to accept the risk for now, document who decided and the rationale (e.g. “defer PQC upgrade of System X until 2026 due to pending vendor support, will revisit next quarterly review”). Essentially, don’t be the CISO who is caught saying in 2030, “Yes, I heard about that threat in 2024 but we never acted.” That is a recipe for career suicide and possible liability. As cybersecurity attorney Richard Borden observed, regulators “are pushing the regulatory risk higher… increasing fines and looking to have responsibility pushed up to senior officers and to the board”. That means a CISO should push the awareness upwards as well – make sure the board and C-suite can’t say they didn’t know.

It’s also wise to educate senior management and the board now about quantum risks in business terms (risk vs. cost, migration timeline, etc.). This not only garners support and resources, but also creates shared responsibility at the highest level. Given that board members themselves could face questions (“did you oversee emerging cyber risks like this?”), many boards will be receptive if you bring it up constructively. In fact, not informing the board could itself be seen as a lapse by the CISO. So, in upcoming strategy meetings, discuss your organization’s plan (or need for a plan) for post-quantum cryptography migration. Being a year or two ahead of the regulatory curve could save everyone – including you – from nasty surprises later.

Conclusion: Reasonable Care in the Post-Quantum Era

For CISOs and organizational leaders, the advent of quantum computing is not just a technological paradigm shift – it’s a looming legal and fiduciary crisis if not managed properly. We are at a pivotal moment where the threat is known and growing, solutions exist, and major governments and standards bodies have rung the alarm bell. The concept of “post-quantum negligence” is emerging to describe the failure to act now to prevent an entirely foreseeable future breach. Both the United States and European Union have signaled that “reasonable” cybersecurity today includes preparing for tomorrow’s quantum reality, through either formal regulation (EU’s hard mandates) or clear expert guidance (US/UK’s soft mandates).

The legal risk of doing nothing is simply too great to ignore:

  • In the EU, regulations like DORA and NIS2 already mandate action – non-compliance could lead to enforcement long before any quantum attacker shows up. By encoding quantum readiness into law, Europe is effectively saying the duty is here and now.
  • In the US, even absent explicit regs, companies that bury their heads in the sand may find themselves on the losing end of negligence lawsuits or punitive regulatory actions once data gets cracked. The first movers – be they regulators or plaintiffs’ lawyers – will point out that the industry knew this was coming and that failing to replace known-broken encryption is as negligent as failing to lock your doors at night.
  • The long gap between cause and effect does not exonerate – it simply means evidence of foresight (or lack thereof) will be dissected after the fact. Just as Equifax was lambasted for ignoring a critical patch, future companies will be lambasted for ignoring PQC migration. The difference is, in the quantum scenario, companies have even more advance notice.
  • CISOs and executives are personally in the crosshairs if they are found to have been willfully ignorant or negligent about this risk. Regulators are increasingly willing to name and shame, or prosecute, individuals for egregious security lapses.

So, what to do? Fortunately, liability here is avoidable – by taking reasonable, proactive steps that are well within reach for most organizations, you can vastly reduce the future blame (and harm). Some key actions include:

  1. Assess and Plan: Conduct a cryptographic risk audit now. Identify what data would be high impact if decrypted in 5-10+ years (e.g. personally identifiable information, intellectual property, national security data, long-term secrets). Inventory your encryption usage – which systems and applications rely on RSA, ECC, or other vulnerable algorithms? Determine the “shelf life” of your data and systems (how long they need to remain secure vs. how long an upgrade would take). This aligns with regulators’ expectations – even the U.S. government required its agencies to do a crypto inventory and quantum risk assessment by 2023. A private company doing the same in 2025 is simply following best practice.
  2. Begin Migration & Crypto-Agility Initiatives: You don’t need to rip and replace everything overnight, but you should be testing NIST’s PQC algorithms and building crypto-agility into your systems. Crypto-agility (the ability to swap out cryptographic algorithms easily) is now considered a security imperative. Aim to adopt a “dual-stack” or hybrid approach where possible – for instance, using classical and post-quantum algorithms in parallel (as NIST recommends for a transition period) so that if one is broken, the other still protects the data (a minimal hybrid key-derivation sketch follows this list). If you have software or hardware vendors, engage them about their PQC roadmap – make it a factor in procurement. The goal is to start the migration such that you’re mostly done by the time a quantum computer arrives, rather than starting when it arrives (which would be too late). Remember, EU guidance wants critical systems on PQC by 2030 – that’s essentially tomorrow in IT planning terms. Taking action now also shows regulators and courts (should it ever come to that) that you were on top of the issue.
  3. Stay Informed and Document It: Track developments (e.g. new NIST standards, industry consortiums, government roadmaps) and integrate them into your risk management. Subscribe to alerts or working groups on post-quantum crypto. Each year, update your assessment: did the risk probability change? Did new solutions emerge? Crucially, document these discussions and decisions. If you decide the risk is low enough to delay active migration, explicitly record that decision and set a timeline to revisit it. If you ask for budget to address quantum and it’s denied, log that you raised it. This paper trail could be your saving grace in court or in front of regulators to prove you weren’t negligent – you made conscious, informed decisions and adjustments as information evolved.
  4. Align with Regulatory Guidance: Even if you’re a U.S. private company not (yet) subject to a quantum law, pay attention to what the EU, UK, and sectors like finance are prescribing. They are essentially providing a benchmark for “reasonable measures.” For instance, if you operate globally, complying with DORA/NIS2 for your EU operations might de facto set your standard across the board – it would be hard to justify different levels of crypto protection in different regions. If you’re in critical infrastructure, note that allies (EU, NATO countries, UK) consider quantum transition part of resilience – so U.S. regulators and industry standards (like NIST’s cybersecurity framework) will likely incorporate that soon. Proactively meeting those guidelines now positions you ahead of the curve. Many organizations are already voluntarily adopting NIST’s PQC and zero-trust encryption models in anticipation of future requirements.
  5. Communicate & Educate: Make sure your Board and C-suite understand the quantum issue in business terms. This is not about scaring them with sci-fi, but about framing it as risk management. Explain the “harvest now, decrypt later” concept and why certain data might be at risk within the company’s strategic timeline. Present that Learned Hand-style cost-benefit: “Here’s what it would roughly cost us to start upgrading vs. the plausible cost of doing nothing.” Often, when non-technical execs see that the order of magnitude of cost (say a few million) is dwarfed by the potential losses (tens of millions), it clicks that this is a worthwhile insurance policy. Also mention the regulatory landscape – e.g., “Our European peers are already moving on this, and regulators there see it as essential; we don’t want to be caught flat-footed if U.S. regulators or courts adopt the same view.” By educating now, you create buy-in. Notably, boards themselves are being put on notice that cyber risks are their responsibility too, so many board members will want to know about quantum risk once it’s on their radar. Don’t wait for them to ask; proactively brief them. This can secure the top-down support needed to actually execute a multi-year cryptography overhaul.
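To illustrate the “dual-stack” idea from step 2, here is a minimal Python sketch that combines a classical X25519 shared secret with a post-quantum KEM shared secret via HKDF, so the derived session key stays safe as long as either primitive remains unbroken. It uses the widely available cryptography package for the classical half; the ML-KEM secret is a placeholder standing in for whatever PQC library your stack provides (an assumption, not an endorsement of any specific product).

```python
# Hybrid (classical + post-quantum) key derivation sketch.
# Assumes: pip install cryptography. The PQC shared secret below is a
# placeholder for the output of an ML-KEM (FIPS 203) encapsulation from
# whichever post-quantum library you adopt.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# 1. Classical ECDH (X25519) exchange between two parties.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# 2. Post-quantum KEM shared secret - placeholder bytes here; in production
#    this comes from your ML-KEM implementation's encapsulation/decapsulation.
pq_secret = os.urandom(32)

# 3. Derive one session key from BOTH secrets, so it holds if either survives.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pqc-transition-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```

This mirrors the hybrid key-exchange approach seen in early PQC deployments of TLS: the classical algorithm preserves today’s assurance and interoperability, while the post-quantum component protects recorded traffic against future decryption.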

To circle back to the key question: Who is liable if you don’t prepare for post-quantum and disaster strikes? Potentially you, your company, and everyone up the chain who ignored the warnings. But the flipside is that liability is avoidable – by exercising reasonable care starting now, you can dramatically reduce both the chance of a breach and the blame if one occurs. In the end, preparing for post-quantum security isn’t just an IT upgrade; it’s a form of legal and reputational insurance.

Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.