The Rearview Mirror Economy
You’re fashionably late to your own work.
I spent most of 1975 trying to look like everyone else at school. This involved saving up pocket money for the exact same platform shoes that half my year already owned, getting my mum to sew wider flares into my trousers until they looked vaguely acceptable, and memorising Top of the Pops so I’d know what to pretend to like on Monday morning. I looked like a mediocre photocopy of someone who’d already nailed the look three months earlier. I was fashionably late to everything, which is another way of saying I was just late.
The whole exercise was pointless because by the time I’d assembled the approved uniform, everyone had moved on to something else. I was chasing a moving target using information that was already out of date. Looking back, I might as well have been asking a database of popular 1975 fashion choices to generate my personality.
The Expensive Echo Chamber
Which is exactly what I caught myself doing last month when I asked an AI to “give me some fresh content ideas” for a piece I was writing. What came back was a perfectly formatted list of suggestions that all sounded vaguely familiar, like I’d seen them before in slightly different packaging. Turns out I had. The AI had essentially handed me a greatest hits compilation of what performed well last year, remixed and served up as “new ideas”.
When you start using these tools for creative work, you’re not getting innovation, you’re getting a statistical average of what already exists. It’s like asking a committee of ghosts what they thought was interesting when they were alive. The AI isn’t looking forward, it’s looking backward through billions of words, images, and concepts that real humans created before its training data got frozen in time. My AI was trained on data up to a certain point, which means everything it suggests is, by definition, historical.
I’ve used AI tools for everything from article outlines to image generation to brainstorming sessions, and every single time, I’m having a conversation with the past. A very well-organised, articulate past that can remix things at impressive speed, but still the past. It’s like having a really keen intern who’s memorised every issue of Wired from 1995 onwards but has never actually looked out the window to see what’s happening now.
The Mechanism of Mediocrity
The technical reason for this is straightforward but somehow gets ignored in all the breathless coverage of AI’s creative potential. These models work by identifying patterns in their training data. They’ve ingested millions of articles, thousands of design portfolios, endless streams of user behaviour, and they’ve learned what combinations of words, images, or concepts tend to appear together. Then they reproduce variations on those patterns.
This is genuinely useful if you want to know “what worked before” or “what do successful examples of X typically look like.” It’s tremendously efficient for generating competent mediocrity at scale. But it’s catastrophically useless if you’re trying to do something that hasn’t been done yet, because by definition, the AI has no pattern to match against.
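If you want to see the limitation in miniature, here’s a toy sketch, entirely my own illustration and nothing like the internals of a real model: a tiny bigram generator that learns which word follows which in a made-up training text, then generates by replaying only the pairs it has already seen. The corpus, the function, and the names are all invented for the example; the point is simply that it cannot produce a combination it never saw.

```python
import random
from collections import defaultdict

# Made-up "training data" standing in for everything the model has ever ingested.
training_text = (
    "book covers with bold serif titles sold well "
    "book covers with muted pastel colours sold well "
    "minimalist book covers with bold titles performed well"
)

# Learn the patterns: for each word, record which words followed it.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    """Generate text by sampling only from word pairs seen in training."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # no pattern to match against, so the model simply stops
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("book"))
# Every output is a remix of the training sentences. Ask it for a cover idea
# nobody has tried yet and it has, by definition, nothing to draw on.
```

A real model is vastly more sophisticated than this, but the constraint is the same in kind: it can only recombine what was in the data when the data was collected.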
I learned this properly when I spent three days trying to get image generation tools to create something genuinely unusual for a book cover. Every single output looked like it belonged on the same shelf. Slightly different colours, slightly different compositions, but all of them variations on “book covers that performed well between 2018 and 2023.” Not one of them looked like they’d start a trend. They all looked like they were chasing one.
The AI isn’t a crystal ball showing you the future. It’s a rearview mirror showing you an extremely detailed, high-resolution view of where everyone else has already been. And because it’s so detailed and well-presented, it’s incredibly easy to mistake it for insight.
What Actually Works Instead
Right, here’s what I actually do now, having wasted considerable time treating AI like a creative oracle instead of what it actually is, which is a very fast research assistant with no imagination.
First, I’ve stopped asking AI for ideas and started asking it for patterns.
“What are the common approaches to X?” is a question it can answer brilliantly. Once I know what everyone else did, I can consciously do something different. This is using the rearview mirror properly, to see where the traffic’s been so you can take a different route. I’m not asking it to be creative, I’m asking it to show me the shape of the trend I’m trying to avoid.
Second, I do the stupid human thing first.
I write down my half-baked, probably terrible ideas before I ask AI anything. This forces my brain to actually generate something rather than just evaluate what the AI serves up. The AI can help refine these later, but it can’t have the initial thought. That has to come from the bit of me that’s actually experiencing 2024, not the bit that’s regurgitating 2023’s greatest hits.
Third, I use AI to check if something’s been done before, not to suggest what I should do next.
This is less exciting than prompting for “breakthrough ideas” but significantly more useful. I’ll take a concept I’ve come up with and ask the AI “find me examples of this approach” or “what’s similar to this that already exists.” If it finds loads of matches, I know I’m in well-trodden territory and need to push further. If it struggles to find close comparisons, that’s actually a good sign. Possibly.
Fourth, I’ve accepted that the genuinely valuable part of anything I write has to come from me, not from AI.
The AI can help me articulate it, structure it, or find better words for it, but it cannot generate genuinely useful advice for problems that emerged after its training cutoff. It can tell me what worked yesterday. It cannot tell me what will work tomorrow. For that, I need to use my own malfunctioning brain and my own recent experience of getting things wrong.
The Loop Closes
So I’m back where I started, really, trying to look cool in 1975 by copying what was cool in 1974. Except now the copying happens in milliseconds and the source material is exponentially larger. The fundamental stupidity remains the same, mistaking a record of the past for a map of the future.
The AI will happily generate endless variations on “what performed well before.” That’s what it’s built to do. But the moment you start treating those variations as creative direction, you’ve turned yourself into a mediocre photocopy of someone who already had the idea you’re about to have. You’re fashionably late to your own work.
The trick, if there is one, is to treat AI like that mate who’s really well-read but never has an original thought. Useful for knowing what’s already been said. Useless for knowing what to say next. The creative bit, annoyingly, is still down to you.
