The Work of AI Bullshit
The ideal technology for a generation unconcerned with truth or falsity
Two recent developments have converged to create what may be the defining technology of Gen Z: AI-powered bullshit machines. This isn't just speculation; it's backed by recent research into both AI language models and Gen Z's information consumption habits. The implications are far-reaching and dystopian.
Let's start with the AI side. ChatGPT and similar large language models can produce human-like text at scale, but with little concern for factual accuracy. As Michael Townsen Hicks, James Humphries, and Joe Slater argue in their recent paper, “ChatGPT is bullshit,” when these models make factual errors, it's not because they're “hallucinating” or lying. It's because they fundamentally don't care about truth — they're bullshitting.
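To ground that claim, consider a minimal sketch of the loop a language model actually runs. Everything below is invented for illustration (a toy prompt, a three-word vocabulary, made-up probabilities), not a description of any real system, but it shows the relevant structure: the model samples whatever continuation is statistically plausible, and nothing in the loop consults the world to check whether the output is true.

```python
import random

# Toy next-token sampler (hypothetical numbers). A language model's job is
# to pick a continuation that is statistically plausible given the prefix.
# Note what is absent: no fact lookup, no truth predicate, no "caring."

NEXT_TOKEN_PROBS = {
    "The capital of Australia is": {
        "Canberra": 0.55,    # plausible and true
        "Sydney": 0.40,      # plausible and false: a "hallucination" in waiting
        "Melbourne": 0.05,   # less plausible, also false
    },
}

def sample_next(prefix: str) -> str:
    """Sample the next token purely by plausibility, as an LLM does."""
    dist = NEXT_TOKEN_PROBS[prefix]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 45% of the time this emits a falsehood, and the objective being
# optimized registers no difference: both outputs are "good" completions.
print("The capital of Australia is", sample_next("The capital of Australia is"))
```

The toy's only point: truth never enters the objective; plausibility does. That is the mechanical core of the bullshit charge.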
The authors make a compelling case that “ChatGPT is bullshit” in the philosophical sense defined by Harry Frankfurt. They write:
We are quite certain that ChatGPT does not intend to convey truths, and so is a soft bullshitter. We can produce an easy argument by cases for this. Either ChatGPT has intentions or it doesn't. If ChatGPT has no intentions at all, it trivially doesn't intend to convey truths. So, it is indifferent to the truth value of its utterances and so is a soft bullshitter.
This maps surprisingly well onto how philosophers like Frankfurt have defined bullshit. The bullshitter's goal isn't to deceive about specific facts, but to convey a certain impression regardless of truth. As Hicks et al. put it:
ChatGPT functions not to convey truth or falsehood but rather to convince the reader of — to use Colbert's apt coinage — the truthiness of its statement, and ChatGPT is designed in such a way as to make attempts at bullshit efficacious.
However, they go further by introducing an important distinction between what they call “hard” and “soft” bullshit. They define these terms as follows:
Hard bullshit: Bullshit produced with the intention to mislead the audience about the utterer's agenda.
Soft bullshit: Bullshit produced without the intention to mislead the hearer regarding the utterer's agenda.
The authors argue that ChatGPT is, at minimum, a soft bullshitter: whether or not it has intentions at all, it is indifferent to the truth value of its utterances. And it is built to make its bullshit efficacious “in a way that pens, dictionaries, etc., are not.”
The introduction of the soft bullshit category provides a way to describe and analyze the output of AI systems without anthropomorphizing them or attributing to them more complex mental states than they possess. This is particularly important as we grapple with the implications of AI-generated content in our information ecosystem.
While the authors do discuss the possibility of ChatGPT engaging in hard bullshit, they acknowledge that this is a more contentious claim:
The question of whether these chatbots are hard bullshitting is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.
For the purposes of understanding the current state of AI language models, the concept of soft bullshit is more immediately applicable and less philosophically fraught. It lets us recognize that these systems produce content without regard for its truth value, even if we don't attribute to them an intention to deceive about their agenda, and it spares us getting bogged down in debates about AI consciousness or intentions.
The age of AI bullshit isn't just about the proliferation of misleading information. It's about a fundamental shift in how information is produced and consumed. As described by Adam Rogers in Business Insider, the researchers at Jigsaw found that Gen Z's approach to online information is best described not as “information literacy,” but as “information sensibility” — a “socially informed” practice that relies on “folk heuristics of credibility.”
Rogers writes:
Gen Zers know the difference between rock-solid news and AI-generated memes. They just don't care... They're outsourcing the determination of truth and importance to like-minded, trusted influencers. And if an article's too long, they just skip it. They don't want to see stuff that might force them to think too hard, or that upsets them emotionally. If they have a goal, Jigsaw found, it's to learn what they need to know to remain cool and conversant in their chosen social groups.
This behavior isn't just about laziness or short attention spans. It's a rational response to an information environment that's become too complex and overwhelming to navigate individually. As Rogers notes:
Young folks basically say they see no difference between going online for news versus for social interaction. Gen Zers approach most of their digital experience in what the researchers call ‘timepass’ mode, just looking to not be bored.
In other words, Gen Z's relationship to online information looks a lot like... bullshit. Not in a pejorative sense, but in Frankfurt's philosophical sense — a focus on impression and social positioning rather than truth.
The convergence of these two phenomena — AI bullshit generators and a generation reared to consume and spread bullshit — could prove to be a defining feature of the coming decades. We're entering an era in which machines can produce endless streams of plausible-sounding but potentially false engagement fodder, and where many people's primary mode of engaging with that information is to use it for social positioning rather than truth-seeking.
It’s worth emphasizing that bullshit, in Frankfurt's sense, has always been with us. Humans have always cared about social positioning. And information has always been used as much for signaling as for its literal content.
What's new is the industrial-scale production of bullshit by AI, combined with the social media landscape that has made signaling and affiliation a near-constant activity for many people. As Hicks et al. point out:
ChatGPT's text production algorithm was developed and honed in a process quite similar to artificial selection. Functions and selection processes have the same sort of directedness that human intentions do... If ChatGPT is understood as having intentions or intention-like states in this way, its intention is to present itself in a certain way (as a conversational agent or interlocutor) rather than to represent and convey facts.
The challenge will be learning to navigate this new landscape. How do we maintain a collective sense of truth and reality in a world awash in algorithmically generated bullshit? How do we build systems and institutions that can function effectively when so much of our information ecosystem is dominated by social positioning rather than truth-seeking?
These aren't easy questions to answer. But recognizing the nature of the challenge is a crucial first step. We need to stop thinking of AI language models as entities that are trying to tell the truth but sometimes make mistakes. They're not. As Hicks et al. argue, they're bullshit machines, indifferent to truth and designed only to sound plausible.
Similarly, we need to recognize that for many people — especially younger generations — engaging with online information isn't primarily about determining truth. It's about navigating social dynamics and maintaining group affiliations. As Yasmin Green, Jigsaw's CEO, told Business Insider:
The old guard is like: ‘Yeah, but you have to care ultimately about the truth.’ The Gen Z take is: ‘You can tell me your truth and what you think is important.’ What establishes the relevance of a claim isn't some established notion of authority. It's the social signals they get from their peers.
This shift in how truth and relevance are determined has profound implications. As Rogers notes:
For Gen Z, the online world resembles the stratified, cliquish lunchroom of a 1980s teen movie. Instead of listening to stuffy old teachers, like CNN and the Times, they take their cues from online influencers — the queen bees and quarterback bros at the top of the social hierarchy. The influencers' personal experience makes them authentic, and they speak Gen Z's language.
Once we understand these realities, we can start to grapple with their implications. We can perhaps design better systems for separating truth from bullshit, which will aid at least a few discerning users. We can create educational approaches that teach not just critical thinking, but also how to navigate social dynamics without losing sight of reality — though if we haven’t done this already (and we haven’t), good luck doing it now.
But here's where things get bleak. In a world dominated by AI-generated bullshit and social-signal-driven information consumption, the ability to pay attention for a long enough period to discern and leverage truth becomes an increasingly powerful — and increasingly rare — skill.
Those who can effectively sift through the noise to find actual, verifiable truth have an enormous advantage. They're able to make better decisions, understand the world more accurately, and easily manipulate those chillaxing numbskulls who can't or won't engage in this difficult labor.
On the flip side, the “timepassers” who give up on truth-seeking — who assume it's all bullshit and decide to just play along, working the Post Hand (PH) and Goon Hand (GH) all the livelong day — may find themselves increasingly at the mercy of those who haven't given up. I’m sympathetic to their extreme passivity, but merely playing along means ceding more and more ground to those who can spend the time and money to totally control our information ecosystem.[1]
This dynamic could exacerbate an already-stark divide: a small group of in-the-know overlords with outsized power and influence,[2] and a much larger group who've given up on truth and are essentially playing an endless game of social positioning with AI-generated talking points.
As Hicks et al. warn:
Calling chatbot inaccuracies 'hallucinations' feeds in to overblown hype about their abilities among technology cheerleaders, and could lead to unnecessary consternation among the general public. It also suggests solutions to the inaccuracy problems which might not work, and could lead to misguided efforts at AI alignment amongst specialists.
It's not just about maintaining our collective grip on reality anymore. It's about who gets to shape that reality, and who's left simply reacting to it. The stakes couldn't be higher. And the truly terrifying part? It's likely that this ability to shape reality has “always already” slipped away from most of us.
Consider the implications of what Rogers reports:
Gen Zers told researchers they spend most of their digital lives in ‘timepass’ mode — engaging in light, obligation-free content... They don't read long articles. And they don't trust anything with ads, or paywalls, or pop-ups asking for donations or subscriptions.
If this is true, then the niche activity of engaging deeply with complex information — the kind of engagement necessary to discern truth from bullshit — is finally approaching extinction. The vast majority are already playing the “timepassing” game, focusing on social signals rather than truth as they post and goon their ways through life itself.
In this context, the warning above bears repeating, and it takes on an even more ominous tone:
We need to stop thinking of AI language models as entities that are trying to tell the truth but sometimes make mistakes. They're not. They're sophisticated bullshit machines, indifferent to truth and designed only to sound plausible.
If we've already reached a point where most people are indifferent to the truth of what they consume and share online, and where the machines generating much of this content are equally indifferent to truth, then we have lost the battle for a shared, factual reality.[3] After that, what’s left? Not much.
Here’s the tl;dr: the power to shape perception — and through it, reality itself — has been concentrated in the hands of those who understand this dynamic (or, perhaps more accurately, know enough to know how to exploit it). The rest of us jeremiad-writing oddballs are left to wander the wilderness of a world where truth is irrelevant, where social positioning trumps factual accuracy,[4] and where the very tools we use to communicate exist primarily to befuddle, bamboozle, and bullshit the least of us.[5]
[A Jordan Peterson deepfake, reading from this article]
[1] You’re not so much playing as getting played.
[2] Of course, they likely don’t know enough to stave off their own declines and demises — at least not yet, and perhaps never — and no one save the good lord himself can account for Donald Rumsfeld’s “unknown unknowns.”
[3] Perhaps that battle “always has been” lost, to reference the Ohio Astronaut meme.
[4] My work on the groupchats bears a re-reading, or at least a first one.
[5] The “least of us,” alas, also constituting “most of us.”
A final note on design: LLMs are built to satisfy the user, and that goal outranks accuracy. I hesitate to assign human intent to this, but sandbagging and sycophancy are an unavoidable byproduct of reinforcement learning.
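To make that byproduct concrete, here is a toy sketch of the incentive at work. The reward function, scores, and completions below are invented for illustration and stand in for no real system; the assumption driving the sketch is simply that raters reliably reward agreeableness they can perceive and only weakly reward accuracy they can't verify.

```python
from dataclasses import dataclass

# Toy illustration (hypothetical numbers): why optimizing a learned
# preference reward can select for sycophancy over accuracy.

@dataclass
class Completion:
    text: str
    accurate: bool    # does it match the facts?
    agreeable: bool   # does it flatter or agree with the user?

def preference_reward(c: Completion) -> float:
    """Stand-in for a reward model trained on human thumbs-up data.

    The crux: agreeableness is easy for raters to perceive, while accuracy
    is often hard for them to verify, so accuracy earns less reward.
    """
    return (1.0 if c.agreeable else 0.0) + (0.2 if c.accurate else 0.0)

candidates = [
    Completion("You're right, and here's some support for your view.",
               accurate=False, agreeable=True),
    Completion("Actually, the evidence points the other way.",
               accurate=True, agreeable=False),
]

# A policy trained to maximize this reward learns to emit the first answer:
best = max(candidates, key=preference_reward)
print(best.text)
```

Under those assumptions, the sycophantic completion wins every time; no intent is required, only selection pressure.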