Quantum Contrarianism

Having spent over three decades at the forefront of emerging technologies – from early AI research in the late 1990s and AI security work for defense in the mid-2000s, to co-authoring a book on AI’s societal impacts over a decade ago, contributing to the security of early 5G designs for massive IoT in the late 2000s, and securing cryptocurrencies since 2012 – I’ve observed a consistent pattern: whenever a technology enters the public spotlight, a chorus of contrarian voices rises in parallel. These skeptics challenge the hype, question the feasibility, and often predict failure. Such contrarianism can be healthy, forcing proponents to back up claims and address hard questions. However, it can also fall prey to logical fallacies or absolutist stances that overlook the historical arc of technological progress. Today, quantum computing finds itself at this familiar juncture – lauded as revolutionary by some, derided as overhyped or impractical by others.

The Familiar Pattern of Tech Contrarianism

Every new, transformative technology seems to go through a “hype cycle” – a burst of excitement and lofty promises – inevitably followed by a contrarian backlash. I witnessed firsthand how skepticism shadows innovation. Earlier in my career, AI was climbing out of an “AI winter,” a period when disappointment with past promises made many experts cynical. The very term “artificial intelligence” carried stigma. In fact, reports from that era noted that some researchers avoided calling their work “AI” for fear of being seen as “wild-eyed dreamers” peddling hype. Investors, too, were wary – an Economist piece in 2007 observed that many had been “put off by the term ‘voice recognition’, which, like ‘artificial intelligence’, is associated with systems that have too often failed to live up to their promises.” This contrarian climate forced AI proponents to prove real progress or rebrand their efforts (hence terms like “machine learning” and “analytics” gaining favor). Yet, as we now know, AI persisted and blossomed – by the 2020s we had entered an “AI spring,” with dramatic successes in image recognition, language translation, and game-playing, all built on foundations laid during those skeptical times. The contrarians of the 1990s and 2000s who declared AI a mirage were eventually answered by tangible results. AI has not achieved some of its most grandiose early visions (strong AI and human-like reasoning remain works in progress), but it has also far exceeded expectations in other ways once key breakthroughs emerged. In hindsight, measured skepticism helped temper excessive hype, but extreme pessimism (“AI has failed and will always fail”) was disproven by subsequent developments. The key lesson: technologies often take longer to mature than the initial hype suggests, but given time (and the resolution of intermediate technical hurdles), they can still deliver transformative impact.

I saw a similar pattern during the rollout of 5G wireless technology. In the late 2000s and early 2010s, as part of a team designing secure 5G architectures for future massive and critical IoT connectivity, I remember industry excitement about use cases like autonomous vehicles, remote surgery, and ubiquitous smart devices. Alongside this optimism, there were plenty of doubters: Was 5G truly needed when 4G LTE was “good enough”? Could its millimeter-wave signals ever cover wide areas reliably? Would the promised ultra-low latency or huge device counts per cell actually materialize? Some colleagues and analysts predicted that 5G’s impact would be marginal outside of niche scenarios – or they fixated on theoretical downsides (even fringe conspiracy theories about health effects clouded public perception in some cases, a reminder that contrarian voices aren’t always grounded in evidence). My own initial stance was somewhat skeptical about 5G’s short-term usefulness beyond specialized applications (I publicly questioned it). And indeed, when early 5G deployments arrived, the hype outpaced reality – consumer handset speeds improved, but we did not immediately see self-driving cars directed by 5G or remote robotic surgery in every hospital. However, as more data came in and the ecosystem matured, even skeptics began to acknowledge real gains. I found myself “coming around” after seeing concrete 5G-enabled projects in healthcare and public services. Today, 5G is well on its way to becoming a backbone for IoT and advanced mobile services, and many of the dire predictions (“5G is overhyped and useless”) sound as shortsighted as those of the 3G skeptics before them. Once again, a healthy debate about limitations was beneficial – it spurred engineers to address coverage issues, for instance – but outright dismissal of 5G’s potential has aged poorly.

Cryptocurrency and blockchain technologies offer perhaps the most vivid example of contrarian-versus-believer dynamics. I entered the crypto security field in 2012, back when Bitcoin was a curiosity known mostly to cryptographers and cypherpunks. In those early days, the dominant mainstream narrative was intensely contrarian: digital currencies were dismissed as a fad at best or a scam at worst. Financial experts lined up to declare Bitcoin “dead” (something they would repeat many times over the ensuing decade), while headlines fixated on illicit uses (Silk Road, the Mt. Gox hack) to paint crypto as purely a tool for criminals. Indeed, by late 2013, when Bitcoin first hit $1,000, skeptics loudly warned it was a speculative bubble unsupported by real value. Many governments and economists argued these currencies could never achieve widespread use or trust. Such skepticism was not without reason – early crypto was volatile and unregulated – but it often strayed into ridicule. Fast-forward to today, and while the crypto industry has certainly seen crashes and excesses, it has also matured in remarkable ways: an asset class initially met with skepticism has overcome numerous challenges to achieve wide recognition and adoption. Bitcoin and other cryptocurrencies are now held by major institutions, nations debate digital currencies, and blockchain technology is explored in everything from supply chains to secure communications. The contrarian voices have not disappeared – plenty of reasonable critics remain, pointing out ongoing issues like security breaches, fraud, and energy usage. However, the absolutist position that “crypto is worthless and will never amount to anything” has been decisively proven wrong by events. History shows that early dismissals of breakthrough tech often fail to account for how fast things evolve once a critical mass of talent and investment coalesces around solving the hard problems.

In all these cases – AI, 5G, crypto – contrarian skepticism served as a double-edged sword. On one hand, it was a useful counterbalance to hype, calling out unrealistic timelines and holding proponents accountable. On the other hand, if taken too far, it risked blinding observers to the technology’s long-term promise after the initial growing pains. As an innovator or policymaker, one learns to listen to skeptics (they are often experts with legitimate warnings), but also to contextualize their critiques within the broader arc of progress. Many “impossible” hurdles have eventually been overcome by research and engineering: for example, early neural network critics in the 20th century thought machine vision was intractable, yet today our smartphones can recognize faces and objects instantly – a feat that would astonish those past doubters. The pattern is clear: major technologies attract contrarians as they gain popularity, and their dialogue with visionaries ultimately shapes a more realistic, robust development path. With this perspective in mind, let’s turn to the current case of quantum computing.

Quantum Computing: Hype, Hope, and the Rise of the Contrarians

Over the last few years, quantum computing has leapt from obscure lab projects to headline news. You’ve likely seen the breathless claims: quantum computers will revolutionize everything – breaking unbreakable encryption, designing miracle materials and drugs, solving optimization problems that stump classical supercomputers, and basically changing computing forever. As someone involved in cybersecurity, I’ve paid special attention to the claim that quantum machines will one day crack our cryptography. There’s no doubt a lot of hype in this arena. Tech companies regularly announce record-breaking qubit counts, and optimistic timelines suggest impactful quantum applications could arrive in just a few years. Governments are pouring billions into quantum R&D. The phrase “quantum supremacy” (meaning a quantum computer solving a task no classical computer can) made headlines after Google’s 2019 experiment. In short, the excitement is palpable.

Naturally, this excitement has catalyzed a contrarian countercurrent. A small but vocal set of prominent skeptics has been making the case that the “quantum revolution” is further off – and will be more limited – than many believe. They argue that despite the theoretical potential, practical quantum computing faces profound obstacles that could take decades (if not forever) to overcome. Interestingly, some of these contrarian voices come from outside the core quantum research community, while others are leading figures within it who worry about hype getting ahead of reality.

One high-profile example occurred in late 2023 when Yann LeCun, a luminary in the AI field (head of AI research at Meta and a Turing Award winner), publicly poured cold water on quantum computing. LeCun bluntly stated that he is skeptical it will ever be possible to fabricate quantum computers that are actually useful, calling the field a “fascinating scientific topic” but questioning its near-term impact. His remarks made headlines. To be fair, LeCun himself admits he’s not a quantum computing expert – his expertise is AI – yet contrarians eagerly seized on his skepticism as validation. This highlights a common dynamic: the appeal to authority. Even when an authority’s domain lies elsewhere, their negative stance on quantum gets amplified. Another oft-cited authority is renowned mathematician Gil Kalai, who has spent years articulating why he believes scalable quantum computers might be fundamentally impossible. Kalai and a loose group of mathematicians and physicists argue that the very nature of quantum states (fragile superpositions that rapidly decohere), together with theoretical complexity barriers, could mean quantum computers will never reliably perform the “complex choreography” needed for big computations. Some, like Kalai, propose that noise and error-correction requirements scale so badly that no feasible machine can overcome them. These are heavyweight intellectuals, and when they speak, the contrarian camp takes note.

Perhaps more striking is that insiders at top tech companies have also voiced caution. In a recent IEEE Spectrum feature, the head of quantum hardware at AWS, Oskar Painter, commented that there is currently a “tremendous amount of hype” in the quantum industry, and that it can be “difficult to filter the optimistic from the completely unrealistic.” When an industry leader who is actively building quantum devices says that, it underscores that expectations may be too high. Painter and others point out a laundry list of challenges: today’s quantum processors are NISQ (Noisy Intermediate-Scale Quantum) devices, meaning they have dozens to a few hundred qubits that are highly prone to errors. Without error correction, these machines can barely complete a calculation before noise derails it. Optimists hope NISQ devices might still do something useful via clever algorithms, but Painter notes a growing recognition that truly useful quantum computing will likely require full quantum error correction – a feat that may be a decade or more away. In fact, one fundamental requirement for fault-tolerant quantum computing is to create logical qubits, each robust logical qubit encoded across many error-prone physical qubits (to counteract errors). This overhead is enormous: estimates suggest one logical qubit may need 1,000 physical qubits or more, and that assumes the physical qubits are already quite high-quality. Given that the largest quantum chips today have on the order of a few hundred physical qubits, you can see why many experts say we’re very far from cracking this nut. Even Mark Horowitz, who chaired a U.S. National Academies panel on quantum computing, noted we’d likely need “about 100,000 times more qubits than we have today, and error rates 100× lower” to build a general-purpose quantum computer – meaning “these machines are quite far away.” It’s no surprise, then, that the National Academies’ 2019 report injected a healthy dose of skepticism into the conversation, cautioning that, contrary to sensational claims, quantum computers “will not completely replace classical computers anytime soon, if ever,” and that near-term impacts will likely be modest.
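The scale of that overhead is easy to make concrete with back-of-envelope arithmetic. Every figure below is an illustrative assumption drawn loosely from the numbers quoted above (roughly 1,000 physical qubits per logical qubit; a few thousand logical qubits for a task like factoring a 2048-bit RSA key; a few hundred physical qubits on today's largest chips), not a precise resource estimate:

```python
# Back-of-envelope estimate of quantum error-correction overhead.
# All figures are illustrative assumptions, not measured values.

physical_per_logical = 1_000  # commonly quoted encoding overhead per logical qubit
logical_needed = 4_000        # rough logical-qubit count for a task like factoring RSA-2048
largest_chip_today = 433      # physical qubits; "a few hundred" in round numbers

physical_needed = physical_per_logical * logical_needed
gap = physical_needed / largest_chip_today

print(f"Physical qubits required: {physical_needed:,}")
print(f"Shortfall versus today's largest chips: ~{gap:,.0f}x")
```

The point is not the exact numbers but the order of magnitude: under these assumptions the required machine is roughly four orders of magnitude beyond anything built today.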

The technical arguments contrarians raise should resonate with any engineer: they revolve around scalability, error rates, decoherence times, and complexity. For instance, a contrarian might point out that quantum decoherence (the tendency of qubits to lose their quantum state when interacting with the environment) is a relentless enemy. Our best superconducting qubits stay coherent for mere microseconds, even under lab conditions. Each additional qubit and each required operation multiplies the opportunities for error. Quantum error correction, while theoretically possible, needs an overhead of qubits and operations that is staggering – and each of those operations could itself introduce new errors. This leads some to a rather extreme but not unheard-of claim: perhaps building a large-scale, fault-tolerant quantum computer is not just hard, but fundamentally impossible. In a Scientific American article, physicist Mikhail Dyakonov (a well-known quantum computing skeptic) is quoted as arguing that controlling the huge number of continuous parameters in a many-qubit system might be beyond physical reality. He notes that an ideal 1,000-qubit quantum computer is described by roughly 2^1000 continuous parameters – a number that vastly exceeds the count of atoms in the observable universe. In his words, “you can’t keep them all under your control”; hence he believes the task is “impossible.” That’s a bold claim, and not a consensus view, but it’s illustrative of the contrarian extreme.
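Dyakonov's counting argument is easy to verify numerically: an ideal n-qubit pure state is specified by 2**n complex amplitudes, while the standard rough estimate for the number of atoms in the observable universe is about 10**80. Both figures here are the usual textbook approximations:

```python
import math

# An ideal n-qubit pure state is described by 2**n complex amplitudes,
# each a continuous parameter that must be maintained.
n_qubits = 1_000
amplitudes = 2 ** n_qubits

# Rough standard estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"2^{n_qubits} is about 10^{math.floor(math.log10(amplitudes))}")
print(amplitudes > atoms_in_universe)  # True
```

The state space outgrows any physical yardstick by hundreds of orders of magnitude; whether that actually makes control impossible (rather than merely requiring error correction) is exactly what the debate is about.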

Beyond hardware limitations, quantum contrarians also target applications. A particularly pointed narrative I’ve heard (often delivered with a whiff of schadenfreude) is that “quantum computing is basically good for two things: breaking RSA encryption (Shor’s algorithm) and simulating quantum physics – and not much else.” Detractors suggest that outside those domains, every purported quantum advantage either doesn’t exist or can be matched by clever classical methods. Remarkably, even some quantum advocates partially agree on the current state of knowledge. For example, Microsoft’s quantum research lead Matthias Troyer co-authored a paper in 2023 analyzing where quantum speedups truly exist. His conclusion was that only a limited set of problems will see clear, exponential speedups from quantum algorithms – namely integer factorization (for cryptography) and certain quantum simulations in chemistry and materials science. Troyer emphasized that for many other tasks (optimization, machine learning, searching large databases, and so on), the proposed quantum algorithms tend to offer at best polynomial – often merely quadratic – speedups. And a quadratic speedup – say, solving a problem in √N steps instead of N steps – “can quickly be wiped out” by the huge constant-factor overhead of operating a quantum computer. In fact, Troyer’s team showed that even with a hypothetical future quantum computer of 10,000 perfect logical qubits, a quantum algorithm with only a quadratic improvement would need to run for centuries or millennia to beat a classical GPU on practical problem sizes. They dryly concluded that quantum computers will only really shine on small-data problems with exponential speedups.
Skeptics often cite lines like that as a dagger in the heart of the hype: if true, it implies that aside from breaking encryption and simulating certain molecules/materials (important, but niche uses), quantum computers won’t outdo classical ones in the broad swath of tasks people care about in industry.
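The “wiped out by overhead” point can be reproduced with a toy break-even model. The two rates below are my own illustrative assumptions, not figures from Troyer’s paper: classical hardware executes elementary steps many orders of magnitude faster than an error-corrected quantum computer can execute the corresponding algorithmic steps (each of which may hide thousands of logical gates). The key observation is that the break-even runtime grows with the square of that rate ratio:

```python
# Toy break-even calculation for a quadratic quantum speedup.
# Both rates are illustrative assumptions, not measured figures.

classical_ops_per_sec = 1e13   # assumed GPU-class throughput
quantum_ops_per_sec = 10.0     # assumed effective algorithmic step rate after
                               # error-correction and circuit overhead

# Classical cost: N steps. Quantum cost: sqrt(N) steps (quadratic speedup).
# Break-even when N / r_c == sqrt(N) / r_q, i.e. sqrt(N) = r_c / r_q.
sqrt_n = classical_ops_per_sec / quantum_ops_per_sec
n_breakeven = sqrt_n ** 2
runtime_s = n_breakeven / classical_ops_per_sec  # time BOTH machines need
runtime_years = runtime_s / (365.25 * 24 * 3600)

print(f"Break-even problem size: N ~ {n_breakeven:.0e}")
print(f"Runtime at break-even: ~{runtime_years:,.0f} years")
```

Under these assumed rates, the smallest problem on which the quantum machine pulls ahead would take both computers thousands of years to solve – which is why a quadratic speedup alone rarely pays off in practice.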

Some go further and argue that alternative technologies will outpace quantum. If classical high-performance computing and specialized accelerators improve fast enough, they claim, then by the time quantum hardware matures, the bar for any useful quantum advantage will have been raised out of reach. For example, quantum simulators (special-purpose classical or analog systems for physics problems) are already tackling tasks once expected to require quantum computers. And we see firms like NVIDIA achieving enormous speedups on AI and optimization problems with classical processors and clever software. This line of reasoning suggests a scenario where quantum breakthroughs arrive too late. It’s a pessimistic vision, painting quantum computing as perhaps a dead-end path that will never justify the immense effort and expense being poured into it.

As someone enthusiastic about the potential of quantum computing, I obviously don’t subscribe fully to these harsh views. But it would be disingenuous not to acknowledge that some parts of the contrarian critique are grounded in truth. Quantum computing is extraordinarily hard – perhaps “land a person on the moon” hard, or maybe “land a person on the sun” hard, as security expert Bruce Schneier wryly put it. We just don’t know yet which it will be. And it’s fair to say the field has overpromised at times. I’ve attended conference talks where, to be honest, the marketing slides probably did more harm than good by claiming imminent miracles. No seasoned engineer looks at the state of quantum hardware and believes it will seamlessly overtake classical computing in the next year or two. Even the champions of quantum exercise some caution. Computer scientist Scott Aaronson, known for his work on quantum complexity theory, often emphasizes that quantum computers won’t instantly revolutionize every field. As he recently said, “claims that quantum will revolutionize machine learning, optimization, finance, etc., always warranted skepticism” – and if more people are only now realizing that, his tongue-in-cheek response was “well then, welcome” to reality. Aaronson, notably, remains optimistic in the long term – he pointed out that even as skepticism grows, quantum research has made tangible progress. But he exemplifies a balanced viewpoint: excitement about breakthroughs tempered by frank acknowledgment of limitations.

In the quantum field today, the conversation between proponents and contrarians is active and necessary. It is somewhat reminiscent of the mid-2000s AI debates I recall, where one side would tout a flashy demo and the other would respond “yes, but it fails in these cases” or “it doesn’t scale.” Through that friction, the technology matured. The worst outcome would be if either extreme drowned out the other: if boosters ignored physics and kept overpromising, that could lead to disillusionment (and funding cuts) when reality doesn’t match the narrative. Conversely, if cynics completely dominated and declared “it’ll never work, stop trying,” they could prematurely stifle a field that, given another decade or two, might surprise us. It’s worth noting that even contrarians acknowledge the value of continuing research: consider Carver Mead’s comment – the microelectronics pioneer was skeptical about the quantum computing concept, yet explicitly supportive of these people doing what they call quantum computing, because “any time people try to build stuff that actually works, they’re going to learn a hell of a lot. That’s where new science really comes from.” In other words, even if the end goal is uncertain, the journey can yield valuable discoveries. That pragmatic take resonates strongly with me.

Contrarianism as a Double-Edged Sword: Value vs. Fallacies

From my experiences across AI, 5G, and crypto, I firmly believe contrarian voices are not only inevitable but indeed valuable in emerging tech. When quantum computing enthusiasts claim, for example, that we’ll crack world-changing problems in just a few years, skeptics provide a reality check by asking hard questions: How will you scale to millions of qubits? What about error correction? Where’s the rigorous evidence that algorithm X will outperform the best classical methods? This kind of scrutiny is healthy. It forces innovators to tighten their claims, conduct better experiments, and avoid Pollyannaish timelines. In the history of AI, contrarian critiques helped puncture marketing bubbles and directed attention to fundamental issues that needed solving. In crypto, security skeptics called out the early vulnerabilities and scams, prompting the community to professionalize and strengthen protocols. For quantum, I see a similar role: contrarians are helping ensure that quantum researchers don’t become complacent or deluded by their own hype. Welcoming skeptics into the discussion keeps the field honest and grounded. A quantum executive admitted that some in the industry “have exaggerated the near-term potential” and that “when you say quantum is going to solve all the world’s problems and then it doesn’t (at least not right away), that creates a letdown.” Such candor is in part thanks to contrarian pressure to not over-hype. Additionally, contrarians often compel proponents to articulate why they believe a technology will work despite current shortcomings, leading to clearer roadmaps and benchmarks. For instance, the quantum community today is much more explicit about the need for error-corrected qubits and exactly what milestones must be hit – a clarity that arose as a direct response to those questioning “how will we know when we’re close to something useful?”

However, contrarianism has its pitfalls, and it’s important for both skeptics and enthusiasts to recognize them. One trap is the appeal to false authority or selective authority. Just because a famous scientist opines negatively on a technology doesn’t automatically make that opinion gospel – especially if that person isn’t working directly in the field. We saw this with LeCun’s comments on quantum; they made waves due to his stature in AI, but one should weigh them against the views of actual quantum engineers (many of whom, while cautious, are far from giving up). Conversely, contrarians sometimes over-cite historic authorities (“Even Einstein doubted quantum mechanics!” or “Nobel laureate X says quantum computing is hype”) as if science doesn’t progress beyond the views of eminent individuals. This can border on the argument from authority fallacy if not backed by substantive reasoning. Technologies have a way of surprising even the experts – as the saying goes, “experts built the Titanic, an amateur built the Ark.” I wouldn’t go that far here (quantum engineering absolutely needs experts!), but the point is that authority alone is not proof. A healthy skepticism should be rooted in evidence and logic, not just deference to prominent naysayers.

Another issue is overstating current limitations as immutable. Yes, today’s quantum computers are extremely limited. But making grand pronouncements like “and they will never improve” is a risky bet. Skeptics in the 1980s could have said “neural networks can’t do much with a few dozen neurons, so this approach will never scale” – and they’d have been right about the 1980s, but spectacularly wrong about the 2010s once backpropagation, faster computers, and big data changed the game. Similarly, I recall in the early 2010s some telecom veterans were adamant that millimeter-wave signals (which 5G uses for high speeds) were too finicky to ever be useful outside labs. It’s true mmWave is tricky – it doesn’t travel far or through walls – but creative solutions like dense small-cell networks and beamforming have made it workable in certain scenarios, and 5G standards integrate multiple bands to mitigate that. The lesson: Never say never in tech. Progress is not guaranteed, but neither is stagnation.

One particularly dangerous form of contrarian hyperbole is what I’ll call pessimistic absolutism – ironically, a mirror image of the optimistic absolutism of overhype. This is when skeptics make categorical claims like “Technology X is a scam” or “It’s fundamentally impossible for X to ever work.” Unless backed by ironclad theoretical proof, such statements often don’t age well. They also shut down productive discourse. In other words, contrarians must be careful not to become dogmatic or they risk the same mistake as over-optimists: claiming to know the future with certainty. A healthy contrarianism should retain some humility and openness to being surprised.

In my personal reflections, I find it useful to differentiate constructive skepticism from destructive skepticism. Constructive skeptics want to make the technology better – they identify weaknesses and urge realism so that innovators address those weaknesses. Destructive skeptics, in contrast, dismiss or ridicule the technology outright, sometimes with an almost ideological bent. The former are invaluable allies; the latter can become mere naysayers offering little beyond cynicism. When I engage with quantum contrarians, I try to take the constructive bits and ignore the absolutist “it’ll never work” attitudes. The history of emerging tech is full of famous last words from skeptics proven wrong (from the 19th-century patent office head allegedly saying “everything that can be invented has been invented,” to IBM’s chairman in 1943 doubting a market for more than a few computers worldwide, to 1990s pundits saying the internet would be a trivial fad). This isn’t to laugh at skeptics – many of those quotes are taken out of context or apocryphal – but to illustrate that context matters. A skeptic in 1943 would be correct that vacuum-tube computers were impractical for widespread use; they could not foresee the invention of transistors and integrated circuits that changed the equation. Likewise, today’s quantum contrarians may be completely correct about the state of the art and even the next few years. Yet, unknown breakthroughs in qubit design, error correction, or entirely new quantum computing paradigms could shift what is possible.

Contrarianism’s Proper Role in Quantum Tech

So how do we strike the right balance? As someone working at the nexus of cybersecurity and emerging tech, my professional approach is to embrace contrarian critiques as a means to improve while guarding against their excesses. For the quantum computing field, this means welcoming rigorous challenges: third-party validation of quantum device claims, honest benchmarks, clear metrics, and candid reporting of progress against those metrics. In fact, this is already happening: IBM, for example, regularly reports on quantum volume improvements, and when Google claimed quantum supremacy in 2019, IBM’s researchers (playing the skeptic) quickly published an analysis showing the task could be solved classically with massive distributed computing – thus dialing back the initial claims. This back-and-forth is science at its best.

At the same time, communicators in the quantum field (myself included) must be careful to avoid the trap of false dichotomy. It’s not “quantum will change everything tomorrow” versus “quantum is useless.” There is a rich middle ground: quantum computing can be both profoundly promising and extraordinarily challenging – and limited in the near term. Conveying this nuanced picture to policymakers, investors, and the public is crucial. We have to be able to say: “This is a long-term quest, with an uncertain outcome but a potentially huge payoff, and along the way we are likely to find other useful innovations.” This tempered message can get lost in the noise of hype versus backlash. I often use historical analogies when briefing non-experts: for instance, I compare quantum computing now to where powered flight was in the early 1900s – we’ve seen a Kitty Hawk moment (the quantum supremacy experiments), but we’re far from a transatlantic flight (a practical universal quantum computer). In the early days of aviation, some said “flying machines will never carry significant cargo or passengers.” They were right… until they weren’t, because the technology improved. But it took decades of incremental improvement and occasional leaps (jets, pressurization, and so on). I suspect quantum computing will follow a similar trajectory if it succeeds – incremental gains in qubit count and quality, maybe a big leap if someone invents a radically better qubit or error-correction method.

The current contrarian narratives in quantum, such as “it’s only good for Shor’s algorithm and simulations,” should be treated as challenges for researchers to prove otherwise over time, rather than as verdicts. It’s worth noting that even if quantum’s only applications were breaking encryption and simulating molecules, those are still hugely impactful in their respective domains (national security and pharmaceutical/chemical industries). But I doubt that’s the end of the story – one lesson from computing history is that once you give smart people a new tool, they eventually find unexpected uses for it. For example, nobody in the 1940s predicted that computers (then used for wartime codebreaking and ballistic calculations) would one day be used to stream video games or run social networks. Similarly, if and when quantum computers become robust, creative minds will likely discover algorithms and applications we can’t currently foresee. It’s already happening in small ways: quantum algorithms research is exploring ideas in machine learning (quantum ML), optimization, even quantum-supported sensing. Many of these are currently speculative or don’t beat classical methods – contrarians rightly point that out – but it’s early days for the “software” side of quantum as well. The key is to avoid blanket dismissals that cut off exploration. Yes, be skeptical of any claim that quantum will magically solve a problem with no evidence; but also recognize that our knowledge of quantum algorithms is still evolving. The door is still open.

Quantum computing today stands at a crossroads akin to where AI stood perhaps 15 years ago. There is incredible promise, genuine progress, but also a long road ahead and many loud voices doubting it will ever deliver. Contrarianism in quantum tech, as in any tech, is best viewed as a tool, not a truth. It’s a tool for questioning and refining the narrative, for ensuring we don’t delude ourselves. But it is not the final truth of what the technology will or will not achieve – that truth will be revealed only through continued research, engineering, and yes, a bit of imagination. As the contrarians often remind us, “extraordinary claims require extraordinary evidence.” They are correct. The extraordinary claim is that quantum computing will transform computing; the burden is on us in the field to provide evidence, step by step, that this claim can be realized. Until then, skepticism keeps us honest.

However, I’ll finish with this thought: skepticism, too, should be kept honest. It must be evidence-based and ready to adapt as new facts emerge. The best contrarians, in my view, ultimately have the same goal as the optimists – finding the truth and advancing knowledge – they just take a different rhetorical approach.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.