Prose-washing my writing
Bollocks, I Write Like AI
If you consume enough of this smooth, personality-free prose, does your own writing start to sand down its rough edges?
There’s a specific moment of horror that occurs when you catch your own reflection in an unexpected mirror and briefly don’t recognise yourself. Not because you’ve aged or gained weight or developed some horrible facial growth, but because the angle is wrong, the lighting is off, and for half a second you see yourself as a stranger would. It’s profoundly unsettling, that momentary disconnect between who you think you are and what’s actually staring back at you.
I had that exact feeling today when I ran some of my writing through one of those AI detection tools. Those increasingly frustrating pieces of software that promise to sniff out machine-generated text with the confidence of a sniffer dog at an airport, despite having roughly the same accuracy rate as a drunk person trying to parallel park. The result came back: 78% likely to be AI-generated.
My own writing. Stuff I’d sweated over, agonised about, rewritten six times at 2am because the rhythm wasn’t quite right. The detector looked at my words and essentially said “yeah, a robot probably wrote this, mate.”
Now, I could dismiss this as the tool being rubbish, which it absolutely is. These detectors are about as reliable as a chocolate teapot in a heatwave. They flag The Wizard of Oz and Shakespeare as AI-generated. They give the all-clear to chatbot drivel that reads like it was written by a particularly dim algorithm. They’re measuring something, certainly, but what they’re measuring has about as much connection to actual AI authorship as astrology has to astronomy.
But I couldn’t shake the thought - what if they’re right? Not right in the sense that I’ve been secretly replaced by a large language model, though Christ knows some days that would explain a lot, but right in the sense that my writing style has been slowly, inexorably warped by the sheer volume of AI slop I wade through every single day.
Because I do wade through it. Mountains of the stuff. Every morning starts the same way: coffee, and scrolling through enough AI-generated content to fill a small library. Articles that sound like they were written by someone who learned English from a manual. Medium posts that read like a yawn. Newsletter after newsletter of perfectly grammatical, utterly soulless prose that hits every SEO keyword.
And somewhere in that daily deluge, something happened. I’ve started to notice my own sentences getting cleaner. More structured. My paragraphs developing this weird uniformity, like they’d been through some sort of prose-washing. The rough edges I used to agonise over - the weird rhythm, the deliberately awkward phrasing, the bits that only worked when read aloud - started smoothing themselves out.
I wasn’t trying to write like AI. Nobody wakes up thinking “today I’ll sound more like a chatbot.” But your brain is a sponge, and if you spend enough time soaking in one particular type of content, you start wringing it out in your own work. It’s linguistic osmosis. Pattern recognition run amok. Your internal voice slowly adopting the cadence of the vast majority of what you’re reading, even when that vast majority is generated by machines optimised for smoothness rather than truth.
The really insidious bit is that AI-generated text isn’t obviously bad anymore. The early stuff was hilariously awful, you could spot it from space. But the current generation has learned from billions of words of human writing, and what it’s learned is how to sound... acceptable. Professional. Clean. It’s like the prose equivalent of airport lounge furniture, technically functional, utterly forgettable, designed to offend nobody and delight nobody in equal measure.
And that’s what those detectors are actually picking up on, when they bother to work at all. Not the presence of machine generation, but the absence of human mess. The lack of weird choices, unexpected metaphors, deliberately broken rhythms, the bits where a writer goes off on a tangent because something reminded them of something else. The absence of personality, basically.
Which brings us to the genuinely terrifying question: If you consume enough of this smooth, personality-free prose, does your own writing start to sand down its rough edges? Do you start self-editing out the bits that make your voice yours, because they don’t sound “right” anymore? Because what sounds right has been recalibrated by exposure to content optimised for machine readability rather than human connection?
The mechanism here is actually quite simple, which somehow makes it worse. AI language models are trained on vast datasets of human writing, learning to predict what word should come next based on patterns in that data. They get very good at producing text that sounds like the statistical average of everything they’ve learned. Which means they excel at producing content that sounds... normal. Professional. Like everything else.
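If you want to see that mechanism stripped down to its bones, here’s a toy sketch - a bigram counter, nothing remotely like the real thing, with a made-up corpus and made-up function names invented purely for illustration - but it makes the same basic move: count which word tends to follow which, then always reach for the most likely continuation.

```python
# Toy sketch: a bigram "language model" that always picks the most
# statistically common next word. Real models are vastly bigger and
# cleverer, but the core move is the same: predict the likely next
# token from patterns in the training text.
from collections import defaultdict, Counter

corpus = (
    "in this article we will explore the key takeaways "
    "in this post we will explore the main benefits "
    "in this guide we will explore the best practices"
).split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Greedily emit the most common continuation at every step."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # the statistical average wins
    return " ".join(words)

print(generate("in"))  # e.g. "in this article we will explore the key takeaways"
```

Run it and it dutifully regurgitates the blandest possible continuation of its training diet, which is rather the point.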
But some of the content those models are now training on was written by humans who’ve been influenced by AI content. And a sizeable chunk of it was written by AI. We’re creating a feedback loop where machine influences human influences machine, and the whole thing spirals toward some bland, optimised middle ground where nobody can tell the difference anymore because there isn’t one. We’re all slowly converging on the same statistically average voice, like some sort of linguistic entropy.
And the AI detectors are mostly measuring deviation from expected patterns. Write something too smooth, too predictable, too statistically normal, and they flag it as AI. Which would be fine, except “statistically normal” is increasingly what human writing sounds like, because we’ve been reading so much AI content that our sense of what’s normal has shifted. We’re being marked down for sounding like machines, when really we’re just sounding like humans who’ve been reading machines.
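For flavour, here’s roughly what “too predictable” means when you put it in code. This is an invented surprise score with an invented threshold and a toy reference corpus - nowhere near what real detectors actually do - but it shows why smooth, expected prose scores low and lumpy, human-shaped prose scores high.

```python
# Toy sketch of the "too predictable" heuristic (loosely the flavour of
# perplexity-based detection). All probabilities and thresholds here are
# invented for illustration; real detectors are fancier and still unreliable.
import math
from collections import defaultdict, Counter

reference = (
    "we will explore the key benefits and best practices "
    "we will explore the main takeaways and key benefits"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(reference, reference[1:]):
    follows[prev][nxt] += 1

def surprise(text):
    """Average negative log-probability of each word given the previous one.
    Low surprise = smooth, expected prose; high surprise = messy, human-ish."""
    words = text.lower().split()
    total, count = 0.0, 0
    for prev, nxt in zip(words, words[1:]):
        options = follows.get(prev)
        if options:
            prob = options.get(nxt, 0.1) / (sum(options.values()) + 0.1)
        else:
            prob = 0.1  # unseen pattern: treat as fairly surprising
        total += -math.log(prob)
        count += 1
    return total / max(count, 1)

smooth = "we will explore the key benefits"
lumpy = "we will explore the chocolate teapot in a heatwave"
for sample in (smooth, lumpy):
    verdict = "flagged as AI-ish" if surprise(sample) < 1.0 else "looks human"
    print(f"{surprise(sample):.2f}  {verdict}  <- {sample!r}")
```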
So what do you actually do about this? Because sitting here contemplating the slow erosion of your writing voice while the heat death of linguistic individuality approaches is all very intellectually interesting, but it doesn’t help you write something that sounds like you rather than like ChatGPT’s idea of you.
First, and this is going to sound obvious but apparently needs saying: stop reading so much AI slop.
I know this is easier said than done, especially if your job requires you to monitor AI developments, consume tech news, or wade through the endless river of content that flows through social media. But you need to actively seek out human voices. Real ones. Writers who make weird choices. People who break rules deliberately. Stuff that sounds like an actual person wrote it because they had something specific to say, not because an algorithm suggested it might perform well.
For me, this meant going back to actual books. Physical objects containing words written by humans, most of them long dead before anyone thought to optimise a sentence for an algorithm. Brooker, obviously. But also Orwell, Hunter S. Thompson, Dorothy Parker. People whose writing has lumps in it. Whose sentences don’t always land perfectly but land exactly where the writer wanted them. It’s like palate cleansing, except for your brain.
Second, and this requires a level of self-awareness that’s frankly exhausting: start noticing when you’re self-editing for smoothness rather than clarity.
There’s a difference between rewriting something because it genuinely doesn’t communicate what you mean, and rewriting it because it doesn’t sound “professional” enough. That second impulse? That’s the AI talking. That’s you trying to sand down your rough edges because they don’t match the statistically average prose you’ve been consuming. Fight it. Keep the lumps.
I started keeping a “weird cuts” document where I paste all the sentences I delete for being “too much.” Half of them are genuinely rubbish and deserved deletion. But the other half? They’re the bits that actually sound like me. The metaphors that go too far. The sentences that meander. The deliberate awkwardness. Those are the bits worth keeping, even when - especially when - they make you slightly uncomfortable.
Third: write drunk, edit sober, but also edit drunk occasionally.
Not literally, though God knows it’s tempting. But give yourself permission to write first drafts that are deliberately rough, weird, too much. Turn off the part of your brain that’s been trained by reading AI content. Write like nobody’s going to run it through a detector. Write like you’re sending an angry email to a friend at 3am. Then, when you edit, keep more of that mess than you think you should.
The practical reality is this: AI detectors are mostly useless and will remain useless, so optimising your writing to avoid them is a waste of time. But the deeper concern, that your voice is being eroded by exposure to machine-generated content, that’s real. And the only defence is active resistance. Deliberate weirdness. Conscious choice to sound like you rather than like the statistical average of everything.
Does this mean I’ll never get flagged by an AI detector again? Absolutely not. Those things are rubbish. But it does mean I’m less likely to catch my reflection in the mirror and not recognise the writer staring back. Less likely to smooth myself into algorithmic acceptability. More likely to sound like someone who learned to write from humans, not from machines trained on humans trained on machines.
Because ultimately, that’s what this is about. Not whether some dodgy software thinks you’re a robot, but whether you’ve accidentally started writing like one. Whether the rough edges that make your voice yours have been polished away by exposure to content optimised for machines rather than humans. Whether you’re still in there, under all that smoothness, or whether you’ve been quietly replaced by your own statistical average.
I’d like to think I’m still in here. But some days, when I read back what I’ve written and it sounds just a bit too clean, a bit too professional, a bit too much like everything else, I wonder. And that’s the real horror, not that a detector flagged me, but that it might have had a point.
