The computer doesn't care if you're a numbskull. That's what makes it dangerous. The GIGO principle existed long before Alan Turing was potty-trained. Feed it garbage questions, it spits back garbage answers with the same straight face. A data scientist I know complains that her AI assistant fabricates numbers in the spreadsheet analysis she does with it. She misses the fakes, then blames the machine. The machine just nods along.
This is the trap waiting for most everyone rushing to embrace artificial intelligence. The tools work best for people who need them least — the ones who already know enough to spot the mistakes. A lawyer who understands contracts can use AI to draft faster. A writer who knows his craft can use detailed prompts to generate ideas and write in various styles. The rich get richer. But hand these same tools to novices and watch the havoc spread.
I see it everywhere in the content game. Editors and directors dream of replacing writers with AI, telling it "write me a story about the mayor's speech" without having to check the facts, polish the prose, or even read a transcript of the speech (which they’ll of course forget to add, not realizing you need to control the prompt as much as possible).1 They imagine cutting out the human entirely. But this kind of work needs judgment. Writing and art require a modicum of taste. The machine can generate words by the million, but someone still has to read those words in order to rake gems from the rubbish.2
The faster and better organized you are as a thinker, the more useful AI becomes. Know a little about a lot of topics? The machine becomes your research assistant, helping you learn more about each one. But you have to know enough to ask the right questions and catch the wrong answers. You have to understand your subject well enough to know when the AI is making things up.
This is a rare skill, even among the educated. Most people are specialists, deep in one area but shallow everywhere else. AI needs generalists — renaissance minds who can work across boundaries and spot connections. These are exactly the people companies, universities, and government agencies have been pushing out for years, replacing them with narrow experts and rigid processes.
Now those same organizations face a problem. They've built workplaces around specialized knowledge, discrete tasks, strict procedures. But AI works best with fast, flexible thinking and broad understanding. The tools amplify human intelligence but can't replace human judgment and taste.
A content creator3 I know can craft 15-20 pieces of corporate content a week with AI assistance. But he still has to guide the machine, shape its output, verify its claims. The AI speeds up his work but doesn't do his job. It's like having a very fast, very thoughtless assistant — useful only if you know how to manage it.
This creates a paradox. AI might reduce the total number of jobs, but the remaining jobs will require more skill and discernment, not less. You'll need people who can think across disciplines, spot patterns, separate truth from fiction. The exact opposite of the narrowly focused specialists organizations have been breeding.
Most will try to avoid this by using AI to create pure slop — content meant to trick search engines or pad out websites. They'll feed this artificial content back into other AIs, creating an endless loop of machine-generated noise. This is the path of least resistance, the shortcut around the need for human intelligence, and I wouldn’t be surprised if this ends up being the final shape of the river.
But anywhere the output matters — anywhere real humans need to read, understand, and act on the information because shareholder or endowment value is at stake — you'll still need smart people in the loop. People who know enough to ask the right questions and catch the wrong answers. People who can think broadly and deeply at the same time.
This is the hidden cost of artificial intelligence, the tax most companies aren't ready to pay. They want the benefits of AI without investing in the human intelligence needed to use it properly. They dream of replacing knowledge workers entirely, but end up needing smarter ones instead.
The machine doesn't care if you're stupid. But reality does. Feed garbage into an AI, you get garbage back — garbage that looks perfectly plausible until it breaks something important. The tool is only as good as the mind wielding it. That's the trap waiting for everyone rushing to embrace AI without understanding its limitations.
Smart people will use these tools to become more productive, expanding their reach and amplifying their capabilities. Everyone else will either need to get smarter or get left behind. The machine doesn't grade on a curve. It doesn't adjust for ignorance. It simply amplifies whatever you feed it: wisdom or foolishness, knowledge or nonsense.
This is why AI will disappoint many people, especially bosses dreaming of fully automated workplaces ("do Jerry’s job," "bake me a cake" — this without the supposedly busy big boss bothering to check AI Jerry’s work or even tasting the cake). They imagine machines that can truly think and reason, rather than tools that, at least for now, merely amplify human thought and reasoning. They want artificial general intelligence but so far have only narrow AI — incredibly fast but fundamentally limited.
The truth is simpler but harder to accept: AI can make a few clever people more efficient but doesn't make anyone smarter.4 It's a performance enhancer, not an equalizer. The technology taxes our intelligence rather than replacing it. This is the reality companies must face as they rush to embrace artificial intelligence. The machine doesn't care if you're stupid. But the CEOs looking to squeeze the last God almighty dollar out of their firms probably should.
I can’t say enough about this. AIs can now absorb large bodies of text you’ve already vetted (transcripts, etc.), which limits the risk of fabrication. Most AIs will simply make up quotes from the mayor if you ask them to analyze “the speech on April 23, 2023.” Even if you provide a URL, this is still possible. It’s a bit more time-consuming, but you need to build a big, detailed prompt (there’s a good library of prompts here) to get this kind of error-free work.
Unless we assume everything around us is mere lorem ipsum, filler to pass a long day’s journey into night. And maybe it is, maybe it is!
It’s “ya boi” here.
The slop is arguably making everyone dumber, and as I’ve written elsewhere, it’s one possible path forward (the likeliest one, alas).
Love this, as it touches on AI's fundamental inability to do that which distinguishes good art from bad: Creating with intention, making deliberate choices in service of a holistic effect.
This is very true.