
Cybersecurity Negligence and Personal Liability: What CISOs and Board Members Need to Know

“Could I personally be sued or fined if our company gets breached?” This uneasy question is crossing the minds of many CISOs and board members lately. High-profile cyber incidents and evolving regulations have made it clear that cybersecurity is not just an IT problem – it’s a corporate governance and legal liability issue.

Defining “Reasonable” Cybersecurity: From Learned Hand to Global Standards

What does it mean to take “reasonable” precautions in cybersecurity? Different legal systems articulate this standard in different ways, but they share a common core: organizations (and their leaders) are expected to balance the burden of security measures against the likelihood and potential severity of cyber incidents.

In the U.S., this principle was famously captured by Judge Learned Hand’s negligence formula. In United States v. Carroll Towing Co. (1947), Hand explained that a duty to take precautions exists if the burden of a safeguard (B) is less than the probability (P) of an accident times the gravity of harm (L) – that is, when B < P × L. In plain terms, if a security measure is cheaper than the expected loss from a breach, failing to implement it can be deemed negligent and a breach of the duty of care. This Hand rule is essentially a cost-benefit test for reasonable care, and while juries aren’t explicitly taught the formula, U.S. courts use it to guide what an “ordinary careful person” (or company) would do. In practice, it means companies should eliminate “excessive, preventable dangers” by implementing safeguards that are not grossly disproportionate to the risk. For cybersecurity, think of examples like patching a critical software vulnerability: if a patch is readily available (low burden) and the risk of a breach is high, not patching would fail the Learned Hand test for reasonable care.
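To make the Hand rule concrete, here is a minimal sketch of the cost-benefit test applied to a patching decision. All the dollar figures and probabilities below are illustrative assumptions, not real breach statistics:

```python
def hand_test(burden: float, probability: float, loss: float) -> bool:
    """Learned Hand formula: a duty to take the precaution exists if B < P * L."""
    return burden < probability * loss

# Hypothetical scenario: applying a critical patch costs $50,000 in
# engineering time and downtime (B). Left unpatched, assume a 10% chance
# per year of exploitation (P) causing $20 million in damages (L).
burden = 50_000
probability = 0.10
loss = 20_000_000

expected_loss = probability * loss
print(f"Expected loss (P x L): ${expected_loss:,.0f}")
print(f"Duty to patch under the Hand rule: {hand_test(burden, probability, loss)}")
```

With these assumed numbers the expected loss ($2,000,000) dwarfs the burden ($50,000), so the test says the precaution was owed – which is exactly the intuition courts apply when a cheap, available fix was skipped.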

Outside the U.S., legal systems also weigh risk versus cost, albeit without a neat algebraic formula. English law uses the “reasonable person” standard, which similarly considers “the likelihood of harm, the severity of potential harm, and the burden of taking precautions”. A classic illustration is Bolton v. Stone (1951), where a cricket club was not found negligent when a ball struck a passerby outside the ground, because such an accident was exceedingly unlikely and the club had taken some precautions (a fence). The House of Lords held that a breach of duty occurs only if the defendant’s actions fall below what a reasonable person would do in those circumstances, accounting for the probability and gravity of harm versus the effort of prevention. In another case (Wagon Mound (No. 2)), a very small risk was deemed unacceptable because it could have been avoided with virtually no cost – underscoring that even UK courts implicitly ask whether precautions are grossly disproportionate to the risk. The upshot for cybersecurity in the UK (and similarly in other common law jurisdictions) is that organizations must take proportionate security measures: trivial measures should always be taken to prevent serious foreseeable harms, whereas extremely expensive measures might not be required for very improbable threats.

European Union regulations echo the reasonableness standard through language like “appropriate technical and organizational measures.” Under GDPR, for example, companies must implement security appropriate to the risk (considering factors like the state of the art and the cost of implementation). This is essentially a reasonable security test written into law. It aligns with the same principles – higher risk personal data processing calls for more robust safeguards. Notably, “reasonable” doesn’t mean 100% breach-proof – it means acting with due care and diligence expected from a prudent operator in your position. As we’ll see, failing to meet this standard can open the door to legal liability, including personal liability for executives and directors in charge.

The Rising Risk of Personal Liability for Cybersecurity Failures

For a long time, the fallout of cyber incidents was confined to corporate damages – the company might pay fines, face lawsuits, or suffer reputational harm, while individual officers were rarely held personally accountable. That era is ending. Globally, regulators and courts are increasingly willing to tag individual decision-makers – CISOs, CEOs, board members – with liability when egregious security lapses occur or breaches are mishandled. Personal civil suits and even criminal charges are no longer unheard of. This shift has sent a chill through boardrooms and C-suites. Below, we break down the trends in key jurisdictions, and how you as a security leader or director might find yourself in the crosshairs.

United States: Negligence and New Enforcement Approaches

In the U.S., personal liability in cybersecurity historically arose mostly in shareholder lawsuits after a breach. Directors could be sued for breaching their fiduciary duties – essentially, for negligence or lack of good faith in overseeing cyber risks. However, such cases (called derivative suits) have been hard for plaintiffs to win under the business judgment rule, unless there was clear ignorance or bad faith by the board. A notable turning point was the Yahoo data breaches. Yahoo suffered massive breaches from 2013 to 2016 that senior executives knew about but did not disclose or adequately address. Shareholders filed a derivative suit alleging that Yahoo’s officers and directors (including the CEO) breached their duty of care by failing to implement proper security and then covering up the breaches. In 2019, that suit settled for $29 million paid by Yahoo’s former executives and directors (the first monetary recovery in a data-breach derivative suit). The plaintiffs had strong allegations – they claimed Yahoo leadership knew of serious incidents and yet “failed to report them or implement appropriate security measures,” amounting to a reckless cover-up. This case set a precedent that boards can be held to account if they blatantly ignore cyber dangers or hide breaches. (Indeed, the Yahoo breaches knocked an estimated $350 million off Yahoo’s sale price to Verizon – a quantifiable loss to shareholders that underscored the directors’ failures.)

Beyond shareholder suits, U.S. regulators have dramatically stepped up individual enforcement. In late 2022, the Federal Trade Commission did something rare: it named a CEO personally in a data security enforcement action. The FTC’s complaint against Drizly (an alcohol delivery platform) and its CEO James Rellas alleged that the company’s lax security – for example, failing to require two-factor GitHub authentication or monitor for unauthorized access – led to a 2020 breach that exposed 2.5 million customers’ data. Crucially, the FTC pointed out that Drizly (and Rellas) had been warned about security problems two years prior but didn’t fix them. The resulting order not only requires the company to improve practices; it binds the CEO himself to implement a data security program at Drizly and at any future company where he is a major player. As the FTC’s consumer protection director put it, this ensures the CEO “faces consequences for the company’s carelessness” – a clear warning shot that executives personally need to take cybersecurity seriously.

Another watershed came in 2023 with actions by the U.S. Securities and Exchange Commission (SEC) and Department of Justice. The SEC, which oversees public company disclosures, charged the CISO of SolarWinds (Timothy Brown) with fraud in connection to the company’s 2020 supply-chain breach. This was the first time the SEC ever charged a cybersecurity executive individually. The SEC’s complaint says Brown made false statements about SolarWinds’ security in SEC filings and orchestrated a scheme to mislead the public about “industry-standard” practices that were not actually in place. Around the same time, the DOJ scored a high-profile conviction against Joe Sullivan, former Chief Security Officer at Uber, for covering up a 2016 breach. Sullivan was found guilty of obstructing justice and misprision of felony after he paid hackers as if for a “bug bounty” and failed to report the incident to the FTC during an ongoing investigation. This criminal case against a CISO sent shockwaves – it demonstrated that a mishandled incident response can cross the line into criminal conduct by an individual.

The Sullivan conviction deserves a closer look, as it remains the landmark example of personal liability. Sullivan had quietly arranged a $100,000 payment to the hackers – masked as a “bug bounty” – and even had them sign nondisclosure agreements to keep the incident secret, all while failing to disclose the breach to regulators. In 2023 he was sentenced to probation and fined, but the case was precedent-setting – serving as a “loud alarm bell” that CISOs who conceal incidents can face criminal liability for their actions.

And it’s not just CISOs: in 2023 the SEC adopted new cybersecurity disclosure rules requiring senior management and boards to swiftly report material cyber incidents and periodically attest to risk management processes. False or negligent statements in these disclosures could trigger individual liability for CEOs, CFOs, or directors under securities laws (e.g. Sarbanes-Oxley certifications, anti-fraud provisions). State regulators like the New York Department of Financial Services are likewise introducing certification requirements (e.g. an annual CISO attestation of compliance with cyber regulations) – a false certification could result in enforcement against that officer personally. In short, the U.S. is moving towards a paradigm where if you’re the responsible officer signing off on cybersecurity, your name is on the line.

That said, how can you actually be sued by private parties? Consumers affected by breaches have tried to sue companies for negligence, but they face hurdles (like the economic loss doctrine and standing issues). Still, there’s an emerging tort concept of negligent cybersecurity. For instance, if a company fails to patch a known critical vulnerability for a long time and hackers exploit it, affected customers or business partners might claim the company (and conceivably its officers, if they were individually negligent) breached a duty of care. Plaintiffs would argue it was foreseeable that failing to fix such a security hole would cause harm, and that not fixing it was unreasonable under the Hand calculus (since the cost of patching was low compared to the risk). While personal liability for officers in such negligence cases isn’t common (usually the company is the defendant), plaintiffs may allege that executives were grossly negligent or willfully blind, potentially piercing the corporate veil. We’ve yet to see a CEO or CISO lose their house over a data breach lawsuit by customers – but the theories to pursue them are being tested.

The key lesson for U.S. CISOs and directors: regulators and courts won’t hesitate to single you out if they believe you ignored red flags, misled stakeholders, or failed to exercise due care in preventing and responding to breaches. The days of hiding behind the corporate entity are numbered when egregious facts emerge. As one attorney put it, we’re seeing “a growing risk of personal civil and criminal liability in connection with data breaches and related disclosures” for those on the front lines. The prudent course is to document your diligence, ensure accurate public statements about security, and cultivate a culture of transparency (so that if a breach happens, you report and respond lawfully). If you do that, you’re not only protecting the company – you’re also protecting yourself.

United Kingdom: Duties of Care and Diligence for Directors

In the UK, personal liability for cyber negligence tends to flow from directors’ statutory and common-law duties. Under the Companies Act 2006, directors must exercise reasonable care, skill, and diligence in the discharge of their duties (Section 174) and act in a way that promotes the success of the company (Section 172). Failing to manage cyber risks could be seen as a breach of these duties. For example, a board that ignores repeated warnings about outdated software or doesn’t implement basic cybersecurity controls might be falling short of the reasonably diligent person standard required of directors. The UK standard considers the knowledge, skill, and experience that the particular director has (or ought to have) – meaning a tech-savvy board member might be held to a somewhat higher bar on cybersecurity oversight than a layperson. If directors breach their duties, the company itself (or shareholders via a derivative action) can sue them to recover losses.

In practice, UK boards haven’t (yet) been slammed with massive personal verdicts for cyber incidents, but the risk is real and growing. U.K. government reports over the last decade found many boards were insufficiently engaged in cybersecurity, which could set the stage for allegations of negligence if a big breach occurs.

Notably, the UK’s Information Commissioner’s Office (ICO) has powers that indirectly impose personal accountability. The ICO can (and does) demand that senior executives or board members sign personal undertakings to improve practices after a data protection breach. This isn’t exactly a fine on the individual, but it puts their name on a compliance commitment – and if that promise is broken, it could lead to further enforcement. In extreme cases, directors could even face disqualification. For instance, if a company’s breach is caused by consent or connivance of a director, or attributable to their neglect, the director can be personally prosecuted under section 198 of the Data Protection Act 2018 (this typically applies to criminal offenses under the Act, like unlawfully obtaining personal data, rather than for being hacked – but a director who willfully ignores data protection obligations might hit that threshold). There haven’t been high-profile criminal convictions of company directors for being hacked, but directors of smaller firms have been prosecuted for egregious privacy law violations (e.g. nuisance calling companies’ directors have been personally fined when their businesses breached marketing laws). This shows a willingness of UK authorities to go after the individual when they suspect the corporate offense stems from that individual’s decisions.

Additionally, regulated industries in the UK impose personal accountability through governance rules. Financial services are a prime example: regulations from the Financial Conduct Authority (FCA) require firms to have robust risk management. Under the Senior Managers and Certification Regime (SMCR), specific senior managers have prescribed responsibilities (which could include operational resilience and cyber). If a bank fails to manage cyber risks and it’s found that the senior manager responsible did not take “reasonable steps” to prevent the failure, they can face regulatory action – including fines or bans – personally. This aligns with a broader trend: treat cybersecurity like other compliance areas where named individuals are accountable (similar to how a Money Laundering Reporting Officer can be personally liable for AML failings).

From the litigation angle, the UK has seen some class actions and group litigation orders after breaches (for example, claims for misuse of private information or negligence). These are usually against the company as defendant, but one could imagine creative claimants trying to rope in directors if there was evidence of direct mismanagement. Under UK law, proving a director personally owed a duty of care to customers in a cyber context is challenging (directors’ duties are primarily to the company). However, if directors knew of a vulnerability and deliberately chose to ignore it, affected parties might attempt to sue for “gross negligence” or even deceit (if misrepresentations were made). An example: in the TalkTalk 2015 breach, the CEO faced intense criticism for security failings; while she wasn’t personally sued, the incident led to inquiries about board-level responsibility. The reputational fallout alone can end careers, even absent formal liability.

In summary, UK board members and executives should view cybersecurity as part of their core governance duties. The standard of care expects them to stay informed about cyber risks, seek expert advice when needed, and ensure appropriate measures are in place. As one legal analysis put it, directors who ignore cyber risk “may not reach the standard” of care expected and could be found in breach of their duties. To protect themselves, boards should document their cyber oversight: regular agenda items on security, engaging external audits, tracking remediation of identified issues, and generally demonstrating diligence. If a breach does happen, a paper trail showing the board took reasonable steps can be a strong defense (and may prevent the breach in the first place!). Conversely, a board that treated cybersecurity with indifference may find itself facing not only corporate penalties but the ire of shareholders and regulators directed squarely at them as individuals.

European Union: Regulatory Pressure and Management Accountability

Across the EU, personal liability for cybersecurity lapses has been a quieter theme historically, as regulators focused on penalizing companies. The General Data Protection Regulation (GDPR) introduced headline-grabbing fines (up to 4% of global annual turnover) for companies, but it does not typically impose fines on individuals for security breaches. However, this doesn’t mean executives and directors are off the hook in Europe. Two developments are worth noting: (1) member state laws that can hold individuals liable, and (2) new EU directives like NIS2 that explicitly call out management accountability.

Firstly, many EU countries’ laws provide that if a company commits a violation (such as failing to implement required security measures), and this occurred due to a manager’s intent or gross negligence, that manager can face consequences. For example, under Germany’s IT security laws, if a company in a critical sector fails to meet cybersecurity obligations, the executives could be fined or even face criminal liability if non-compliance is willful. In France, while fines for data breaches are levied on companies, there are criminal provisions (in the French Data Protection Act) that can be used against individuals who obstruct investigations or recklessly violate certain orders. Importantly, data protection authorities in some EU states have not shied away from naming and shaming company officers in their decisions, even if the fine is technically on the firm.

The bigger game-changer is the EU NIS2 Directive (Directive (EU) 2022/2555 on Network and Information Security, the successor to the original NIS Directive). NIS2, adopted in late 2022, significantly ups the ante on cybersecurity for a wide range of “essential” and “important” entities (from energy and healthcare to digital providers). Crucially, NIS2 has specific provisions about management bodies. It requires that companies’ management (i.e. directors or equivalent) approve cybersecurity risk measures and can be held accountable for failing to comply. The directive stops short of saying “imprison the CEO for a hack,” but it empowers national regulators to impose personal sanctions on managers. For instance, Italy’s implementation of NIS2 provides that authorities may temporarily suspend managers or directors from their roles if they grossly fail to implement required cybersecurity measures. Under that law, a director of an essential service provider who doesn’t remedy known deficiencies could be declared unfit to manage until things are fixed. This is a dramatic development: it directly threatens individuals with career consequences (and public reputational damage) for non-compliance. Other countries may implement similar sanctions – NIS2 gives leeway to member states to set penalties, which could include fines on individuals or director disqualification in serious cases.

Beyond NIS2, the GDPR itself indirectly pressures leadership. Data regulators have explicitly highlighted the role of boards in fostering compliance. The UK ICO (when it was part of the EU regime) noted that it can require personal undertakings from senior officials to improve data security. And consider the 2018 British Airways and Marriott breaches: the ICO fined British Airways £20 million and Marriott £18.4 million under GDPR for inadequate security. In those cases, while individuals weren’t fined, the investigations scrutinized whether management followed industry best practices. In Marriott’s case, the breach originated in Starwood’s systems years before Marriott acquired it – yet regulators faulted Marriott for not doing enough due diligence and for failing to monitor and secure the merged network in the years after. The message is that top leadership is expected to proactively guard against known risks (even ones that pre-date your tenure or acquisition).

The European approach can be summarized as “organizational liability with personal accountability behind the scenes.” If a company is hit with a GDPR or NIS2 fine, you can bet the CEO and possibly the board will face internal fallout or removal by shareholders. Moreover, Europe’s emphasis on corporate governance in new regulations (like the Digital Operational Resilience Act for financial entities, or upcoming AI Act provisions) often explicitly ties senior management to compliance. For example, banks in the EU under DORA will have to have management sign off on cyber plans. Repeated failure could trigger fit-and-proper person reviews for those managers.

In short, EU executives and directors cannot be complacent. They should treat cybersecurity compliance as mission-critical, not just to avoid company fines but to avoid personal career jeopardy. To drive this home, NIS2 even mandates that management must be trained in cybersecurity – acknowledging that boards need to know their stuff to govern effectively. A director who remains willfully ignorant of cyber risks may not only harm their company but also expose themselves to regulatory action.

A final note: Some European jurisdictions allow civil lawsuits against directors in limited cases. For instance, under Italian law, if a company’s lack of security measures leads to damages, a customer or partner might attempt to sue the executives under general tort principles (though they’d have to overcome the hurdle that the duty was the company’s). While such lawsuits are rare, they become more plausible if evidence shows a particular executive personally engaged in wrongdoing (e.g. ordered IT to ignore security to save money, resulting in a breach). As Europe’s legal landscape evolves, personal liability for cyber governance failures may become more explicit. Already, the tone is shifting from “company should pay” to “those in charge should answer for their decisions.”

Beyond the West: Personal Accountability in Other Jurisdictions (Case Study: Singapore)

Personal liability for cybersecurity isn’t just a U.S./EU concern. Other jurisdictions are also moving to hold leaders accountable. A striking example is Singapore, especially in the financial services sector. Singapore’s Monetary Authority (MAS) has been ahead of the curve in stressing individual accountability. In 2021, MAS introduced Guidelines on Individual Accountability and Conduct, making clear that senior managers in key roles (including technology risk) will be held responsible for misconduct on their watch. Even before that, Singapore’s laws carried teeth: the Cybersecurity Act 2018 provides that if a company (especially a designated critical infrastructure operator) commits an offence under the Act, officers who consented to or negligently failed to prevent the offence can be personally prosecuted. In other words, if your company blatantly ignores a directive from the Cybersecurity Commissioner and that leads to an incident, you as the director or CISO could be criminally charged for not doing your part. Penalties can include fines or even jail in serious cases.

Singapore’s Personal Data Protection Act (PDPA) similarly has provisions that if a corporate offense (like failing to protect personal data) is attributable to a director’s neglect or connivance, that director can be charged. So far, enforcement of PDPA has mostly seen fines on companies, but the law is there. Moreover, regulators in Singapore have not hesitated to call out individuals. In one notable incident, after a major data breach at a university, the data protection authority specifically named an IT manager as having failed in his duties (though the fine was levied on the university). This naming-and-shaming approach signals that individuals are in the spotlight.

In the financial sector, MAS can issue prohibition orders to ban executives from working in finance if they are found to have gravely mishandled risk management. For example, if a bank suffers a breach due to clear oversight failures, MAS could potentially bar the responsible director or officer from serving in any similar capacity for a period. The Singapore Exchange (SGX) too can demand that directors undergo training or even step down if a listed company has serious control failures. These are very direct personal consequences.

The expectation in Singapore is that directors actively oversee cybersecurity and stay educated on the topic. Simply saying “I’m not a tech person” is not a defense. As the Bird & Bird cybersecurity report in Singapore put it, “it is unlikely that Directors and executives will be able to disclaim their liability by passively discharging their duties” – they must be sufficiently familiar with cyber risks to ask the tough questions and ensure proper action. The Companies Act in Singapore, like the UK, requires reasonable care and diligence from directors. A director who never inquires about the company’s cyber preparedness, never budgets for security, and ignores the IT team’s warnings could be found in breach of that duty if things go wrong.

Other jurisdictions in Asia-Pacific have analogous trends. Australia, for instance, has seen regulatory guidance that cyber resilience is a board responsibility, and under corporate law a director’s duty to act with due care could encompass cybersecurity oversight. In 2022, after a series of high-profile breaches in Australia, there were public calls for stronger personal accountability (though formal legal changes are pending). Meanwhile, India is considering a data protection law that includes potential criminal liability for certain data breaches, and Japan amended its Personal Information Protection law to increase penalties (though targeting companies). The global momentum is toward greater individual accountability as part of a mature cybersecurity culture.

Bottom line: Outside the traditional Western centers, the writing on the wall is the same – if you are a senior manager or director, you’re expected to ensure your organization doesn’t cut corners on cybersecurity. If you do, don’t be surprised if regulators decide to make an example of you in person. As a CISO or board member, it’s wise to familiarize yourself with the local cyber laws and regulatory expectations wherever your company operates. Personal liability can take different forms (fines, bans, even jail), but the best way to avoid all of them is to foster a strong security program and a documented culture of compliance and risk management from the top down.

When Time Catches Up: Long-Tail Consequences of Cyber Failures

One particularly sobering aspect of cybersecurity liability is the time lag. Decisions (or omissions) you make today might not lead to an incident immediately – it could be years before the failure manifests, but when it does, the fallout can be massive. This is the concept of the “long tail” of cyber risk. Unlike a car accident which has immediate effects, a security oversight (say, using an outdated encryption algorithm or not decommissioning servers properly) might sit latent until an enterprising attacker exploits it or an audit uncovers it. By then, those responsible might have moved on or even forgotten the issue – but the law and public will remember.

Let’s look at a few examples that highlight how lawsuits and enforcement can hit long after the original lapse, underscoring the need for persistent vigilance.

  • Yahoo (2013-2017) – We’ve already discussed Yahoo’s epic breach in the context of personal liability, but it’s also a poster child for delays. Yahoo suffered major breaches in 2013 and 2014 but didn’t disclose them until 2016. During those years, users’ data was in criminal hands and Yahoo’s leadership was (allegedly) aware of serious intrusions. By the time the truth came out in 2017, the company was in the final stages of being acquired by Verizon. The revelations of the old breaches directly led to a $350 million reduction in Verizon’s purchase price. Shareholder lawsuits followed, as did SEC penalties in 2018 for the failure to disclose in a timely manner. So, an initial security failure (circa 2014) resulted in corporate and personal consequences 3-4 years later. The lag likely made things worse: regulators and courts frowned on the fact that the company let the issue fester. One takeaway is that the clock doesn’t run out on accountability just because time passed – if anything, a cover-up or neglect over time can aggravate liability. As the plaintiffs in Yahoo’s case argued, every additional day of silence or inaction was a further breach of duty.
  • Marriott/Starwood (2014-2018) – In 2018, Marriott International disclosed a massive data breach affecting up to 500 million guest records in the Starwood reservations database. Shockingly, the breach had begun four years earlier in 2014, within Starwood’s systems (Starwood was a hotel chain Marriott acquired in 2016). The attackers had been lurking undetected, exfiltrating data from what was now Marriott’s asset. The fallout hit in 2018-2020: Marriott faced regulatory investigations across the globe, including the UK ICO which fined it £18.4 million under GDPR for failing to secure those systems. The ICO explicitly noted the time gap: the breach began before acquisition, but continued due to Marriott’s inadequate post-merger security monitoring. Additionally, Marriott was hit with class-action lawsuits in the U.S. and UK on behalf of customers; some argued that Starwood’s lax security (and Marriott’s failure to catch it) was negligent. Imagine being a Marriott executive who, in 2016, decided not to invest heavily in integrating Starwood’s IT – two years later, that decision is being second-guessed by regulators with 20/20 hindsight. The lesson here is successor liability: when you acquire or inherit systems, their old sins become yours. A vulnerability from years past can explode later. For boards, this highlights the importance of cyber due diligence in M&A – something that, if skipped, can give rise to claims well down the line when a dormant attacker finally trips an alarm.
  • Morgan Stanley (2015-2020) – Not all breaches are about hackers. Sometimes a failure to follow through on security basics can come back years later as a regulatory nightmare. Morgan Stanley learned this when in 2020 the U.S. Office of the Comptroller of the Currency fined it $60 million for “unsafe or unsound practices” in decommissioning old hardware. What happened? Starting in 2015, Morgan Stanley retired some data center equipment, hiring a vendor to wipe and resell the devices. Years later, it was discovered that thousands of those hard drives still contained unencrypted customer data – they hadn’t been properly wiped and ended up being auctioned off online. The bank scrambled to recover some drives, but many were gone. Regulators in 2020 called the firm’s failures “astonishing” – noting that the data was not encrypted despite the capability existing, and that the bank hadn’t overseen the vendor. This scenario shows how a lapse starting in 2015 (not turning on disk encryption, not supervising the disposal vendor) only surfaced in 2019/2020, leading to hefty penalties and lawsuits by states and private claimants. For CISOs and IT leaders, it’s a caution that security negligence can age like a time bomb: an oversight might lie hidden until an incident (in this case, an unrelated person finding a Morgan Stanley server on eBay) brings it all out. And regulators will treat the fact that it persisted for years as an aggravating factor, not an excuse.
  • Others (the long tail is everywhere) – There are many more examples. In the Target 2013 breach, the company had been warned about its point-of-sale malware risks weeks prior but didn’t act quickly – the breach led to 2015 settlements with banks and card networks, and derivative suits against directors (ultimately settled) questioning their prior oversight. In the Equifax 2017 breach, a critical patch went unapplied for about two months; the breach happened, and by 2019 Equifax had paid at least $700 million in fines and settlements – the decision not to patch in May led to punishment years later. We also see long-tail issues in industrial cybersecurity: a utility might overlook a vulnerability in SCADA systems for a decade, but if an incident occurs, investigators will pore over records going back years to find “who knew what, when.” Importantly, courts in some jurisdictions may extend liability if the harm was not immediately discoverable. For instance, some U.S. states apply a “discovery rule” for breach victims – the statute of limitations may only start when the victim learns of the breach (which could be years after the fact), meaning companies can face lawsuits long after the initial misstep.

What can leaders do about this? The obvious answer is don’t procrastinate on security fixes. If an audit or regulator identifies a weakness, address it promptly and document that you did. It might cost money now, but it can save you enormous pain later – potentially even saving your job or avoiding personal legal risk. Another strategy is to implement strong continuous monitoring. Many breaches that smolder for years (like Starwood’s) could be shortened if companies invest in detection. The shorter the dwell time of an attacker, the less likely that regulators will slam you for not knowing sooner. Also, maintain thorough records of your remediation efforts. If, despite your best efforts, something was missed and blows up years later, being able to show that “back in 2018 we assessed that risk and here’s why we made the decision we did” can help demonstrate you were not negligent or willful.

Lastly, consider cyber insurance and proper indemnification clauses – these won’t prevent a breach, but if a long-tail incident occurs, insurance might cover some of the delayed costs, and strong indemnification/advancement provisions can ensure your company covers your legal fees if you as an officer are named in a lawsuit. Yahoo’s case, for example, resulted in a $29M settlement paid by D&O (Directors & Officers) insurance on behalf of the individuals. That safety net existed because the individuals hadn’t engaged in fraud; D&O policies typically exclude fraudulent conduct. So, while you should do everything to avoid breaches, also prepare for the worst by having the right support in place should a long-tail incident come knocking.

Conclusion: Proactive Protection – for Your Company and Yourself

The evolving landscape of cybersecurity liability makes one thing abundantly clear: CISOs, executives, and board members must treat cyber risks as seriously as financial, legal, or safety risks. The law is increasingly expecting you to take reasonable, prudent actions to secure data and systems – and it will judge you with hindsight if you fail to do so. The days of saying “I didn’t know” or “we’re not a tech company” are over. Ignorance is not bliss; it’s liability.

How can you protect both your organization and your own neck? Here are a few guiding principles derived from the discussion above:

  • Embed Cybersecurity into Governance: Make cybersecurity a standing item in board meetings. Ensure there’s a clear line of reporting from the CISO to the board or a dedicated committee. When major IT initiatives or acquisitions are discussed, put cyber on the checklist (e.g. “Have we assessed the security of that new system or the company we’re buying?”). A documented trail of board deliberation and decisions on cybersecurity can be a powerful shield in court, demonstrating you exercised due care.
  • Know and Apply the Standard of Care: Stay informed about what regulators and industry standards consider “reasonable” security for your sector. Frameworks like ISO 27001, NIST CSF, or local standards provide a baseline. While compliance with standards alone may not guarantee immunity from liability, it’s strong evidence that you took recognized precautions. Conversely, if you’re straying from best practices, you should have a documented risk acceptance and rationale. Remember the Learned Hand logic: if a cheap control can prevent a likely disaster, you simply must implement it. As a director or CISO, think like a plaintiff’s lawyer for a moment – what are the most glaring things they’d say you failed to do if a breach happened? Once you identify those, fix them now, before an incident.
  • Foster a Culture of Transparency and Incident Response: Many personal liability cases (Sullivan/Uber, Yahoo, etc.) have a common theme: the cover-up or misrepresentation got the individual in trouble more than the underlying hack did. Regulators will go easier on you if you respond to incidents forthrightly and responsibly. In fact, new laws require timely notification (GDPR’s 72-hour rule, SEC’s 4-day disclosure rule for material incidents, etc.). Train your team that hiding breaches is a career-ending mistake – not just for them but potentially for the CISO or CEO. Develop an incident response plan that includes escalation to the appropriate leadership and legal counsel immediately, so you don’t lose precious time or appear to be sweeping things under the rug.
  • Personal Risk Management: As an executive or director, ensure you’re covered by robust D&O insurance that includes cyber events. Given the FTC’s actions, also consider whether your indemnification covers regulatory investigations and fines (in some jurisdictions, fines aren’t indemnifiable, but legal defense should be). It’s also wise to keep yourself educated – attend that seminar on cyber law, read advisories (like this one!) and perhaps even get technical familiarization. Directors in Singapore are being encouraged to take courses on cybersecurity; that’s a practice that could benefit leaders everywhere. Not only will it make you a better overseer, it will show that you are trying to fulfill your duty of skill and care regarding cyber risk.
  • Document, Document, Document: If you take one thing away, it’s the importance of documentation. Courts and regulators assess reasonableness largely through records. Meeting minutes that reflect discussions of cyber risk, internal memos about decisions (e.g. why you did or did not deploy a certain security solution), and audit reports with management responses all paint a picture of your diligence. If an incident happens and you face scrutiny, being able to produce a timeline of your proactive measures could make the difference between liability and vindication. And if nothing bad happens (hopefully), those same efforts will have improved your security posture and potentially prevented incidents – a win-win.
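The Learned Hand cost-benefit test invoked above can be sketched as a simple calculation. The figures and the `hand_test` helper below are purely hypothetical, for illustration only – real risk quantification involves far more nuance than a single expected-value comparison:

```python
# Toy illustration of the Learned Hand formula: a safeguard is "required"
# when its burden B is less than the expected loss it prevents (P x L).
# All dollar figures and probabilities here are hypothetical.

def hand_test(burden: float, probability: float, loss: float) -> bool:
    """Return True if B < P * L, i.e. the safeguard costs less than
    the expected loss from the incident it would prevent."""
    return burden < probability * loss

# Hypothetical: deploying a critical patch costs $50,000, against a
# 10% annual chance of a breach causing $5M in losses.
required = hand_test(burden=50_000, probability=0.10, loss=5_000_000)
print(required)  # expected loss is $500,000, so skipping the patch fails the test
```

Run against these assumed numbers, the check returns `True`: the patch costs a tenth of the expected loss, so under Hand-style reasoning, declining to deploy it would look negligent. The value of the exercise is less the arithmetic than the documentation: recording B, P, and L estimates for each accepted risk is exactly the kind of paper trail the next bullet recommends.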

In the end, the goal of these legal shifts is not to scare executives (though they do have that effect) – it’s to incentivize better security outcomes. As a CISO or board member, embracing this responsibility is part of being a modern business leader. Yes, the stakes are higher now that your own bank account or freedom might be on the line, but with the right approach, you can drastically reduce those risks. Think of strong cybersecurity governance as protecting not just your company’s assets, but also the careers and reputations of everyone in the executive suite. In the cybersecurity realm, an old saying holds true: an ounce of prevention is worth a pound of cure. The “cure” (legal fallout) is costly and painful; the “prevention” (good security and governance) is challenging but ultimately far preferable.

So patch that system, brief your board candidly, test your incident response, and sleep a little sounder – knowing you’re doing right by your stakeholders and keeping yourself out of the next headline about CISO indictments. After all, as the legal standards across jurisdictions make plain, reasonable security is not just best practice – it’s becoming a personal mandate.

Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.