
Science Confirms What Large Corporate Survivors Already Knew – Organizational Bullshit Makes You Worse at Your Job

Sometimes the most interesting papers have nothing to do with quantum mechanics. Sometimes they validate everything you’ve been ranting about for years.

Regular readers will know that organizational bullshit is a topic I’ve been fascinated by – and personally scarred by – for years. I wrote about it at length back in 2022, drawing on the foundational academic work by Frankfurt, Spicer, McCarthy, and others, and making the case that we need to systematically free organizations from the stuff. I even built a Quantum Technobabble Generator for PostQuantum.com, essentially a bullshit generator for our own industry, because I figured that if people could see how easy it is to produce impressive-sounding quantum nonsense algorithmically, they might think twice before falling for it in a vendor pitch.

So when a Cornell researcher independently builds his own corporate bullshit generator, uses it to develop a scientifically validated psychometric scale, and publishes findings that confirm everything I suspected during 30+ years in Fortune 500 and Big 4 consulting – you can imagine my reaction. It was somewhere between vindication and a Vietnam flashback.

The paper is Shane Littrell’s “The Corporate Bullshit Receptivity Scale,” freshly published in Personality and Individual Differences and already making waves everywhere from The Register to Inc.com. And the findings are, to put it mildly, devastating.

(Full paper: Littrell, S. (2026). The Corporate Bullshit Receptivity Scale: Development, validation, and associations with workplace outcomes. Personality and Individual Differences, 255, 113699. https://doi.org/10.1016/j.paid.2026.113699)

What the CBSR Actually Measures

Here’s the setup. Littrell built a “corporate bullshit generator” in Excel (I used AI for mine, but the principle is identical) – an algorithm that takes the syntactic structure of real Fortune 500 executive quotes and fills them with randomly selected buzzwords from annual reports and industry publications. The result: statements that are grammatically correct, stylistically authentic, and semantically meaningless.

My personal favorite from the paper: “Working at the intersection of cross-collateralization and blue-sky thinking, we will actualize a renewed level of cradle-to-grave credentialing and end-state vision.”

If you read that and thought, “Hm, that’s not bad,” I have some uncomfortable news for you.
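For anyone curious how little machinery this actually takes, here is a minimal sketch in Python of the template-and-buzzword approach the paper describes. The templates and word pools below are my own illustrative inventions, not Littrell’s actual materials, but the mechanism is the same: take the skeleton of an executive sentence and fill the slots at random.

```python
import random

# Illustrative buzzword pools – not Littrell's actual word lists.
VERBS = ["actualize", "operationalize", "potentiate", "pressure-test", "sunset"]
ADJECTIVES = ["cross-collateralized", "blue-sky", "cradle-to-grave", "mission-critical", "synergistic"]
NOUNS = ["credentialing", "thought leadership", "end-state vision", "adaptive coherence", "value streams"]

# Sentence skeletons that mimic the syntax of real executive quotes,
# with slots for randomly chosen buzzwords.
TEMPLATES = [
    "Working at the intersection of {adj} thinking and {noun}, we will {verb} a renewed level of {noun2}.",
    "Going forward, our priority is to {verb} {adj} {noun} across every layer of the organization.",
    "We must {verb} our {noun} to stay ahead of {adj} disruption.",
]

def generate_corporate_bullshit() -> str:
    """Fill a random executive-style template with randomly chosen buzzwords."""
    template = random.choice(TEMPLATES)
    return template.format(
        verb=random.choice(VERBS),
        adj=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        noun2=random.choice(NOUNS),
    )

if __name__ == "__main__":
    for _ in range(3):
        print(generate_corporate_bullshit())
```

Run it a few times and you get output that is grammatically fine, stylistically plausible, and says nothing at all – which is the whole point.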

Over 1,000 working adults across four studies were asked to rate these algorithmically generated sentences alongside real quotes from actual business leaders on how much “business savvy” they expressed. The resulting scale cleanly separates into two factors: receptivity to corporate bullshit (the fake stuff) and receptivity to genuine corporate speech (the real stuff). And the gap between the two tells you something important about a person’s cognitive profile.
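To make the two-factor idea concrete, here is a rough sketch of how such a discernment measure could be computed from one respondent’s ratings. The exact scoring procedure in the paper may differ; this only shows the shape of the idea: average the ratings given to the generated items, average the ratings given to the genuine items, and look at the gap.

```python
from statistics import mean

def cbsr_scores(ratings: dict[str, float], item_types: dict[str, str]) -> dict[str, float]:
    """
    ratings:    item_id -> how much "business savvy" the respondent saw (e.g. 1-5)
    item_types: item_id -> "generated" (algorithmic bullshit) or "genuine" (real quote)

    Returns mean receptivity to each item type plus the gap between them,
    a rough discernment measure (negative = the fake stuff impressed you more).
    """
    generated = [r for i, r in ratings.items() if item_types[i] == "generated"]
    genuine = [r for i, r in ratings.items() if item_types[i] == "genuine"]
    return {
        "bullshit_receptivity": mean(generated),
        "genuine_receptivity": mean(genuine),
        "discernment": mean(genuine) - mean(generated),
    }

if __name__ == "__main__":
    ratings = {"g1": 4.5, "g2": 4.0, "r1": 3.5, "r2": 4.0}
    types = {"g1": "generated", "g2": "generated", "r1": "genuine", "r2": "genuine"}
    print(cbsr_scores(ratings, types))  # negative discernment: nonsense rated above the real thing
```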

The Uncomfortable Findings

People who scored high on the Corporate Bullshit Receptivity Scale – those who found the generated nonsense genuinely impressive – were significantly more likely to:

  • Score lower on measures of analytical thinking and fluid intelligence
  • Perform worse on workplace decision-making tests (situational judgment tests, commonly used in hiring)
  • Rate their bosses as more “transformational” and “visionary”
  • Feel more inspired by corporate mission statements
  • Engage in persuasive bullshitting themselves

That last point deserves emphasis. The research confirms what many of us suspected: bullshit is a closed loop. People who fall for it are more likely to produce it. And people who produce it create organizational environments where more people fall for it. It’s a self-reinforcing flywheel – one that, left unchecked, drives out the analytically sharp workers who see through it and promotes those who don’t.

Littrell’s study also tested the CBSR against Pennycook’s widely used Pseudo-profound Bullshit Receptivity Scale (the one that uses Deepak Chopra-style nonsense). In predicting workplace decision-making, corporate bullshit receptivity was a stronger and more robust predictor than pseudo-profound bullshit receptivity. Context matters. The boardroom has its own breed.

The Flashback

I read this paper with a mix of recognition and mild PTSD.

I spent over 30 years in large consulting firms and Fortune Global 500 environments. And I can tell you from lived experience that Littrell’s findings aren’t just statistically significant. They’re autobiographical for anyone who has sat through enough partner meetings, strategy offsites, or “transformation” pitches.

In the consulting world, I watched people build entire careers on their ability to bullshit convincingly, in the complete absence of any other useful skill. The corporate bullshit flywheel was the career ladder. The people who could “pressure-test adaptive coherence” with a straight face got promoted. The people who said, “I don’t understand what you just said, and I don’t think you do either” got managed out – or, if they were lucky, shuffled into a technical role where they couldn’t embarrass anyone at the client dinner.

The paper’s finding that bullshit-receptive employees rated their supervisors as more “transformational” and “visionary” hits especially hard. How many leadership reputations have been built not on actual vision but on the audience’s inability to tell the difference between vision and vapor?

Organizational Bullshit Is Getting Worse, Not Better

This is the part that should concern anyone in an industry, like quantum computing or cybersecurity, that depends on technical precision and honest assessment of risk.

Despite nearly two decades of academic work defining, measuring, and warning about organizational bullshit, the problem is getting worse. A recent study in Review of Communication found that organizational identification – how strongly employees identify with their company – is positively associated with all three factors of organizational bullshit: disregard for truth, bullshit language, and the boss factor. The more you belong, the more bullshit you accept and perpetuate. Which, if you think about it, is a depressing organizational dynamic.

And I see it creeping into our own space. Quantum computing and cybersecurity are both bullshit-fertile environments: high complexity, low public literacy, enormous hype cycles, and a lot of money sloshing around looking for narratives to attach to. The quantum industry already has its own strain of what I call Q-FUD (Quantum Fear, Uncertainty, and Doubt), in which vendors, consultants, and even some governments exaggerate threats or capabilities to create urgency that isn’t warranted by the underlying science.

When someone tells you they can “actualize quantum-safe transformation across your enterprise attack surface,” you’re hearing the corporate bullshit generator in real time. You just might not have had the scale to measure it. (Though if you want a taste of how this works in our field specifically, go spend five minutes with the Quantum Technobabble Generator. The output is disturbingly close to things I’ve heard in actual vendor presentations.)

The AI Amplification Problem

Here’s where things get interesting, and where I genuinely don’t know how this will play out.

Merriam-Webster named “slop” as its 2025 Word of the Year, referring to the deluge of low-quality, AI-generated content clogging inboxes and feeds. This isn’t a coincidence. Large language models are, by their very architecture, corporate bullshit generators. They are trained on enormous corpora of text that includes decades of annual reports, consultant slide decks, management bestsellers, and LinkedIn posts. They have internalized the syntax, rhythm, and vocabulary of corporate speak at a depth that no individual human bullshitter could ever achieve.

The question is: will AI reduce organizational bullshit by enabling clearer, more direct communication? Or will it amplify it by giving everyone a frictionless tool for producing syntactically perfect, semantically empty prose at industrial scale?

I suspect the answer, at least in the short term, is amplification. When you ask an AI to “draft an executive summary” or “write a leadership update,” the default output gravitates toward exactly the kind of buzzword-laden, impressively vague language that Littrell’s generator was designed to mimic. The AI doesn’t know it’s bullshitting. It has no regard for truth because it has no concept of truth – it’s optimizing for what sounds like the kind of thing that should come next, based on statistical patterns of what humans have written before. And what humans have written before is, in many corporate contexts, a lot of bullshit.

There’s a real possibility that AI will create a new superclass of corporate bullshit, what I’d call industrial-grade bullshit, that is more polished, more confident, and harder to detect than anything a human could produce unaided. If Littrell’s research shows that people already struggle to distinguish algorithmically generated corporate nonsense from real executive speech, imagine what happens when the algorithm gets better. And has access to your company’s internal style guide. And can produce it in twelve languages.

On the optimistic side, there’s a scenario where AI tools become bullshit detectors rather than bullshit generators. Imagine an AI assistant that flags vague language in executive communications, that highlights when a corporate memo is semantically empty, or that scores meeting transcripts on a CBSR-equivalent scale. If Littrell can build a bullshit generator in Excel, someone can build a bullshit detector in an LLM. The technology is symmetric – the question is which direction we point it.
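As a toy illustration of that symmetry – deliberately not an LLM, just a crude keyword heuristic of my own – here is a sketch of a detector that flags buzzword-dense sentences. A serious detector would need a far richer model of vagueness than a word list, but the point about which direction you point the technology stands.

```python
import re

# Illustrative buzzword list; a real detector would need a much richer
# model of semantic emptiness than simple keyword density.
BUZZWORDS = {
    "synergy", "synergistic", "actualize", "leverage", "paradigm",
    "transformational", "holistic", "thought leadership", "end-state",
    "operationalize", "best-in-class", "value-add",
}

def buzzword_density(text: str) -> float:
    """Fraction of words (and two-word phrases) that match the buzzword list."""
    words = re.findall(r"[a-z\-]+", text.lower())
    phrases = [" ".join(p) for p in zip(words, words[1:])]
    hits = sum(w in BUZZWORDS for w in words) + sum(p in BUZZWORDS for p in phrases)
    return hits / max(len(words), 1)

def flag_if_vapor(text: str, threshold: float = 0.15) -> str:
    score = buzzword_density(text)
    verdict = "FLAG: reads like the generator" if score >= threshold else "probably says something"
    return f"{verdict} (buzzword density {score:.0%})"

if __name__ == "__main__":
    print(flag_if_vapor("We will actualize synergistic, best-in-class thought leadership."))
    print(flag_if_vapor("The migration to ML-KEM starts with an inventory of every system that still uses RSA."))
```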

Why This Matters for Quantum and Cybersecurity

In our world, the cost of bullshit isn’t just wasted time in meetings. It’s misallocated security budgets. It’s CISOs making decisions about cryptographic migration based on vendor presentations that sound impressive but say nothing. It’s national policymakers setting quantum strategies based on hype rather than hardware realities. It’s boards approving “quantum-safe transformation” programs that are neither quantum-safe nor transformative.

The CBSR research confirms something practitioners know intuitively: that the people most susceptible to impressive-sounding nonsense are also the worst decision-makers. In cybersecurity, bad decisions have consequences measured in breached data, regulatory penalties, and operational disruption. In the quantum transition specifically, we’re talking about national security timelines and the integrity of financial infrastructure.

We cannot afford a cryptographic migration strategy built on “synergistic thought leadership” and “architecting to potentiate on a vertical landscape.” We need precise technical language, honest capability assessments, and leaders who can tell the difference between signal and noise — even when the noise comes in a very well-designed slide deck.

The Bottom Line

Shane Littrell has given us something genuinely useful: a scientifically validated reminder that corporate bullshit isn’t harmless. It correlates with worse analytical thinking, worse decision-making, and a self-reinforcing organizational culture that rewards style over substance.

The fact that this paper exists, and that it made headlines in 2026, tells me two things. First, that organizational bullshit is now too pervasive to ignore, even for academics who normally study more “serious” topics. Second, that we still haven’t figured out how to fix it, despite knowing exactly what it is, how it works, and what it costs.

My own prescription hasn’t changed much since I first wrote about this. Comprehend it, recognize it, act against it, prevent it. But I’ll add one new item for the AI era: audit your AI outputs with the same skepticism you’d apply to a junior consultant’s first slide deck. If it sounds impressive but you can’t explain what it means, it’s probably bullshit – whether a human wrote it or a machine did.

And if you want to test yourself? Go read the ten CBSR statements in the paper. If you find yourself nodding along to “sunset our resonating focus,” it might be time for some self-reflection.

Or at least a career change out of consulting.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.