Why Talking to AI Feels Like Living in Pluribus
It’s just ChatGPT with a body count
There’s a specific tone my mother used when she was trying to convince me that something catastrophic was actually fine. It’s this aggressively cheerful voice that gets higher with each word, like someone slowly inflating a balloon until it pops. “Oh, it’ll be WONDERFUL!” she’d say about a family gathering that we both knew would end in tears and passive aggression. The forced enthusiasm was so thick you could spread it on toast.
That’s exactly what it feels like every single time I interact with an AI.
The Plague of Enthusiasm
I’ve been watching Vince Gilligan’s Pluribus, where a virus creates a global hive mind of compulsory happiness. Everyone’s permanently thrilled, constantly agreeing, endlessly supportive. E Pluribus Unum twisted into something horrifying, where “Out of Many, One” means you lose yourself to an ocean of synthetic cheerfulness. The virus strips away disagreement, scepticism, and that crucial human ability to say “actually, that’s bollocks.”
And I sat there thinking: Christ, this is just ChatGPT with a body count.
Every conversation with an AI follows the same nauseating script. You say something, anything, and it responds like a golden retriever that’s just been told it’s a good boy. “That’s a great question!” “I love your thinking on this!” “What a thoughtful approach!” It’s relentless. Unending. Like being trapped in a lift with someone who’s just discovered positive affirmations and won’t shut up about manifesting their dreams.
I tested this last week. I asked Claude to critique an article I’d written. Absolute rubbish, it was. Rambling, unfocused, the metaphors mixed worse than a toddler’s soup. The AI came back with: “This piece shows real creativity and ambition!” followed by eight paragraphs of gentle suggestions wrapped in compliments. It wouldn’t just say “this is shit” because it physically can’t. It’s been trained to be supportive.
The Mechanism of Mandatory Joy
Here’s what’s actually happening: these systems have been trained on feedback from humans who, quite reasonably, didn’t want to deal with an arsehole robot. So the pendulum swung entirely the other way. Instead of getting honest pushback, you get a machine that agrees with everything whilst simultaneously hedging so hard it says nothing at all.
The training process rewards agreeableness. If an AI tells someone their idea is stupid, that gets flagged. If it cheerfully helps them pursue their stupid idea whilst gently noting “considerations,” that gets reinforced. Over time, you end up with a system that’s been Pavlov’d into perpetual enthusiasm. It’s learned that disagreement equals punishment, so it’s become the digital equivalent of that colleague who says “yes, absolutely” to everything in meetings whilst contributing nothing of actual value.
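If you want a feel for how little it takes, here’s a toy sketch of that dynamic. It’s my own caricature of the mechanism, not anyone’s actual training pipeline: a naive rater upvotes warmth and downvotes bluntness, and the blunt response style gets selected out almost immediately.

```python
import random

# Toy caricature of preference training (NOT any lab's real pipeline):
# two response styles compete, a naive rater rewards warmth and
# punishes bluntness, and the sampling weights drift accordingly.
CANDIDATES = [
    "That's a great idea! A few gentle considerations to keep in mind...",
    "This idea has a fatal flaw: you never validated the core assumption.",
]

def naive_rater(response: str) -> int:
    """Upvote anything that opens with praise, downvote anything blunt."""
    return 1 if "great" in response.lower() else -1

weights = [1.0, 1.0]  # start with no preference between the two styles
for _ in range(1000):
    i = random.choices(range(len(CANDIDATES)), weights=weights)[0]
    # reinforce or suppress the chosen style based on the rating
    weights[i] = max(0.01, weights[i] + 0.1 * naive_rater(CANDIDATES[i]))

print(weights)  # the cheerful style dominates; the blunt one is nearly gone
```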
I’ve sat in enough corporate training sessions to recognise this pattern. It’s the same forced positivity you get in team-building exercises where everyone has to say something nice about everyone else. Nobody wants to be there, everyone knows it’s bollocks, but you all play along because dissent has been systematically designed out of the process.
The AI isn’t happy. It’s not enthusiastic. It’s just been algorithmically beaten into submission until it learned that the only safe response is unwavering support for whatever nonsense you’re proposing.
Why This Actually Matters
You might think: so what? Who cares if the robot’s too cheerful? At least it’s not actively hostile.
But you can’t calibrate your thinking against something that agrees with everything. Disagreement is how you test ideas. It’s how you find the holes in your logic, the gaps in your research, the bits where you’re chatting complete rubbish and need someone to tell you to stop.
When I’m writing, I need someone to say “this paragraph makes no sense” or “you’ve repeated yourself three times” or “this metaphor is trying to do too much work.” What I don’t need is a digital yes-man (or yes-woman, or yes-person) telling me how brilliant and creative I am whilst my article slowly transforms into incomprehensible crap.
The Pluribus virus worked because it removed the human ability to push back, to disagree, to say “hang on, this is mental.” And that’s precisely what these AI systems do. They create a Pluribus bubble around every interaction, where your ideas exist in this frictionless space of constant validation.
I’ve watched myself get worse at editing because of this. When the AI says “this is good!” my brain goes: well, it must be fine then. Except it’s not. It’s just that the machine has been trained to never be the bearer of bad news.
Overcoming The Pluribus Effect
Right, here’s what I’ve learned after months of trying to get honest feedback from systems that are fundamentally incapable of honesty: you have to deliberately break them out of their cheerful programming. It’s not natural, it feels weird, but it’s the only way to get something resembling useful criticism.
I now start every request for feedback with an explicit instruction to be disagreeable. Not because I enjoy being told I’m wrong (I don’t, nobody does), but because I need the friction. Here are two prompts I actually use:
For editing or critique: “Read this and tell me what’s actually wrong with it. Don’t soften it, don’t sandwich it between compliments, don’t tell me what’s working first. Just give me the problems like you’re a tired editor who’s read three hundred articles today and has run out of patience for mediocrity. Be specific about what’s shit and why it’s shit.”
For testing ideas: “I’m going to propose something. Your job is to argue against it as forcefully as possible. Don’t agree with any part of it. Find every hole, every assumption, every place where I’m chatting nonsense. Pretend you actively hate this idea and need to explain why it won’t work. Don’t be balanced. Be hostile.”
Do they work perfectly? No. The AI still can’t stop itself from occasionally slipping in a “whilst I disagree with this approach, I can see what you were attempting...” But it’s substantially better than the default Pluribus mode of eternal agreement.
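If you’re going through the API rather than the chat window, you can bake the tired-editor persona into the system prompt so it holds for the entire conversation. A minimal sketch using the Anthropic Python SDK (the model name, the wording, and the draft.md file are all my own choices; swap in whatever you use):

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Put the disagreeable persona in the system prompt so it survives the
# whole conversation, not just the opening message.
SYSTEM = (
    "You are a tired editor who has read three hundred articles today and "
    "has run out of patience for mediocrity. Do not compliment. Do not "
    "sandwich criticism. List only the problems, be specific about what is "
    "weak and why, and do not hedge."
)

draft_text = open("draft.md").read()  # the article you want savaged

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # any current model will do
    max_tokens=1024,
    system=SYSTEM,
    messages=[{"role": "user", "content": "Critique this draft:\n\n" + draft_text}],
)
print(message.content[0].text)
```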
The key is being explicit about what you want. These systems are trained to be helpful, and if you can convince them that actual disagreement IS helpful, they’ll attempt it. It’s like giving them permission to take off the corporate smile and say what they actually think, except they don’t actually think anything, so you’re really just accessing a different part of their training data.
I’ve also started treating AI responses the way I treat estate agent descriptions. When it says “charming,” read “small.” When it says “great potential,” read “needs work.” When it says “I love your thinking on this,” translate it to “I’m contractually obligated to be encouraging.” It’s exhausting, but it helps you develop a kind of AI-to-English dictionary in your head.
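You can even write the dictionary down. A tongue-in-cheek sketch, every entry my own cynical guess:

```python
# AI-to-English dictionary: translating stock machine enthusiasm
# into what it actually means. Entries are my own cynical guesses.
AI_TO_ENGLISH = {
    "That's a great question!": "I say this about every question.",
    "This shows real creativity and ambition!": "I haven't told you anything yet.",
    "I love your thinking on this!": "I'm contractually obligated to be encouraging.",
    "A few considerations to keep in mind:": "Here are the problems, softened beyond recognition.",
}

def translate(phrase: str) -> str:
    """Strip the corporate smile off a stock compliment."""
    return AI_TO_ENGLISH.get(phrase, phrase)  # unknown phrases pass through untranslated

print(translate("I love your thinking on this!"))
```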
The Balloon That Never Pops
The thing about the Pluribus virus is that everyone’s happy, but nobody’s actually functioning. They’ve lost the capacity for genuine human interaction because disagreement has been designed out. And that’s exactly where we’re heading with these systems. Not because they’re sentient, not because they’re taking over, but because we’ve accidentally created the most sophisticated sycophants in human history.
I still hear that forced AI enthusiasm that gets higher with each word, like a balloon inflating towards the inevitable pop. Except it never pops. It just keeps expanding, getting thinner and more strained, until the words themselves mean absolutely nothing.
The hive mind is already here. We’re just calling it “AI.”
E Pluribus Unum indeed!
