I use artificial intelligence every day, and I pay for it. Without it, I couldn’t create the magazine I’m about to launch. In that sense, it reminds me of Airbnb in 2018: without it, I wouldn’t have been able to travel across the United States the way I did, and the experience would almost certainly have been poorer.
That said, AI has never once struck me as intelligent. It clearly performs intelligence, much like two extraordinary people I know — among the most enjoyable companions imaginable — who can spin anecdotes endlessly, yet whom I wouldn’t consider intelligent. To put it bluntly, I think my chickens are smarter than they are.
Noam Chomsky’s critique of artificial intelligence is compelling, though not entirely satisfying. Compelling because human intelligence is not simply the accumulation of information, nor can it exist without some kind of ethical framework. When I think about the most intelligent people I’ve met, many were barely educated, had traveled little, and had never touched a computer.
To me, intelligence resembles consciousness. Scientists seem convinced that if we map the neurons of the brain closely enough, consciousness will emerge; likewise, if we feed enough data into a machine and refine the probabilistic logic, intelligence will appear. But I suspect both consciousness and intelligence are inseparable from lived experience. They belong only to sentient beings. Cognitive experience remains inaccessible to scientific determinism.
So should I start calling Claude a stochastic parrot? I’m not sure. Semantics have always seemed essential to me. That’s precisely why I keep searching for the right term: something that acknowledges AI’s usefulness without mistaking it for intelligence. Useful, yes — but perhaps mainly because modern life has become an endless competition to complicate itself.
Clutter Killer? Signal Booster? Complexity Reducer?
The search for the proper name continues.
The Airbnb analogy does real work: it draws a clean line between “tool that unlocks something you couldn’t do otherwise” and “proof of intelligence.” That distinction is worth keeping — though it quietly sets up a problem you don’t quite solve.
The Chomsky framing is the essay’s most interesting move, and you’re right that it’s more compelling than satisfying. But the reason it’s unsatisfying matters: Chomsky’s argument is essentially structural — intelligence requires an innate generative grammar, not just pattern accumulation — and you mostly sidestep that in favour of a different claim: that intelligence requires lived experience, that it belongs to sentient beings. That’s a defensible position, but it’s doing more work than you’re giving it credit for. It’s also not obviously true. When you say your chickens are smarter than two brilliant raconteurs you know, you’re invoking something like practical judgment or situational awareness — which is interesting, but it’s a different argument from the consciousness-is-inaccessible-to-determinism one you land on later. The essay runs three threads — performative vs. genuine intelligence, Chomsky’s structuralism, consciousness as irreducibly experiential — without quite braiding them.
The name-search at the end is charming and honest, but it risks doing the wrong job. “Complexity Reducer” describes what AI does for you, not what it is. That’s a function, not a nature. Which might actually be the point you’re circling: maybe the right name is user-relative. “Stochastic parrot” fails because it’s condescending about the output; your alternatives fail because they’re only about utility. What you seem to want is a term that holds both — something useful that isn’t intelligent — without collapsing into either dismissal or hype.
Is there a word for that? Or is the absence of a good word the actual argument?