Using a Supercomputer to Microwave a Potato
How we're wasting the most powerful thinking tool ever built on writing LinkedIn posts
When I was about twenty-five, I tried to understand my pension. Not because I was particularly responsible, but because someone told me I was meant to have one, and apparently I did, somewhere, possibly. I got the paperwork out - about forty pages of dense financial terminology that might as well have been written in ancient Sumerian - and attempted to parse phrases like “defined contribution scheme”, and “annuity rate”, and “compound interest over the vesting period.”
I understood individual words. “Compound” is when something gets worse. “Interest” is when you care about something. “Rate” is how fast something happens. Put them together in that specific order and my brain just slid off the page like a fried egg off a non-stick pan.
I gave up after twenty minutes and shoved it all back in the drawer, where it remained for another decade. Not because I was stupid, but because I didn’t have a framework to make sense of the complexity. The information was there, I just had no way to organise it into something my brain could actually use.
The Framework We’re Ignoring
Which is exactly what I think about every time I watch someone use AI to write a fucking email.
We’ve been given access to this extraordinary tool that can actually process genuine complexity - the kind of layered, multifaceted problems that would’ve taken a team of specialists weeks to untangle - and we’re using it to generate three paragraphs thanking Derek from accounts for his input on the quarterly review. It’s like buying a supercomputer and using it to microwave potatoes.
The potential of AI isn’t replacing human thinking. It’s giving us the framework to attempt thinking that was previously too complex, too time-consuming, or too expensive to even try.
It’s the instruction manual I never had for my pension, except for basically everything.
And we’re wasting it on tweets.
The Complexity We’re Not Tackling
We’ve been given a tool that could help us understand genuinely difficult things - complex legal documents, medical research, financial planning, technical documentation, strategic decisions with multiple variables - and instead we’re using it to automate the easiest, most brainless tasks we do.
Email drafts. Social media posts. Meeting summaries. Slightly rephrasing the same presentation we’ve given twelve times.
I know people who’ll spend twenty minutes prompting ChatGPT to write a three-sentence email response, but won’t spend five minutes asking it to help them understand the actual complex problem their job requires them to solve.
They’ll use it to generate vapid LinkedIn content about “leveraging synergies” but not to break down the fifty-page contract they’re about to sign. They’ll ask it to write tweets but not to help them actually learn something difficult.
It’s not that writing emails is bad. It’s that we’re using a tool designed to tackle complexity on tasks that don’t require any complexity at all.
We’re automating the trivial and ignoring the transformational.
What It’s Actually Good At
Here’s what AI can do that’s genuinely useful. It can take something enormously complicated and give you a framework to understand it. Not do the thinking for you, but organise the complexity so you can actually think about it properly.
Medical research papers written in impenetrable jargon? It can translate that into language you can actually parse, explain the methodology, point out the limitations, help you understand what the findings actually mean.
Legal contracts full of deliberate obfuscation? It can break down each clause, explain the implications, highlight the bits you should actually pay attention to.
Technical documentation that assumes you have three degrees and a working knowledge of ancient Greek? It can build you a mental model of how the system works.
The point isn’t that it replaces expert judgment. It’s that it gives you the scaffolding to attempt understanding things that were previously locked behind specialist knowledge. It’s the difference between staring blankly at pension paperwork and actually being able to make an informed decision about your own financial future.
But that requires effort. It requires you to engage with complexity rather than avoid it. And apparently, it’s much easier to just generate another email about “touching base.”
Automation Stupidity
We’ve fallen into this stupid pattern where we use AI to automate the things we can already do easily, while continuing to avoid the things that are actually hard. Because automation feels productive. It feels like you’re using the technology. You can point at your email drafts and your social media posts and your meeting summaries and say “look, I’m being efficient.”
But you’re not being more capable. You’re just being slightly faster at being exactly as capable as you were before.
The real question isn’t “can AI write this email for me?” It’s “what could I understand or accomplish with AI that I literally couldn’t do before?” And the answer to that question requires you to actually identify something complex and difficult and then spend time wrestling with it, which is much less immediately satisfying than watching ChatGPT generate a tweet.
I watched someone spend an hour getting AI to write increasingly elaborate out-of-office messages with jokes in them. An hour. On an out-of-office message. That same hour could’ve been spent understanding something genuinely useful - how their mortgage actually works, what their employment contract actually says, how to properly evaluate that business proposal they’re meant to be reviewing.
But the out-of-office message feels like a win because you can see the output immediately. Understanding your mortgage is hard and boring and doesn’t give you that little dopamine hit of “ooh, the computer wrote something.”
What To Use AI For
Use it to understand things you’ve been avoiding because they’re too complex. That contract you signed without reading. That medical diagnosis you nodded along to without really understanding. That financial product someone’s trying to sell you. Upload it, ask specific questions, and get it broken down into component parts you can actually evaluate. Not as a replacement for expert advice, but as a way to ask better questions when you do talk to an expert.
Use it to build mental models of complex systems. How does the planning permission process actually work? What are the actual mechanics of how pension contributions compound over time? How does this technical system I’m meant to be using actually function under the surface? Ask it to explain not just what happens, but why it works that way. Build yourself a framework.
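To make "compound over time" concrete, here's a minimal illustrative sketch of monthly contributions compounding in a pension pot. Every number in it is invented for the example (the contribution, the 5% return, the 30-year horizon), and it ignores fees, inflation, and tax — it's a mental model, not financial advice.

```python
# Illustrative sketch: how monthly pension contributions compound.
# All figures are made up for the example; real pensions also involve
# fees, inflation, and tax, which this deliberately ignores.

def pension_balance(monthly_contribution, annual_rate, years):
    """Grow a pot by contributing monthly and compounding monthly."""
    monthly_rate = annual_rate / 12
    balance = 0.0
    for _ in range(years * 12):
        balance = (balance + monthly_contribution) * (1 + monthly_rate)
    return balance

total_paid_in = 200 * 12 * 30           # £72,000 contributed over 30 years
pot = pension_balance(200, 0.05, 30)    # roughly £167,000 at a 5% nominal return
```

The point the sketch makes is the one the paperwork buries: over long horizons, the growth ends up larger than everything you actually paid in.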
Use it to evaluate decisions with multiple variables. You’re trying to decide between three job offers with different salaries, benefits, locations, and career trajectories. Or you’re planning a major purchase with a dozen different factors to weigh. Get it to help you map out the actual trade-offs: not to make the decision for you, but to structure your thinking so you’re not just going with whichever option feels nicest.
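One way that "structure your thinking" can look in practice is a simple weighted trade-off table, where the factors and the weights are written down explicitly instead of living in your gut. This is a hypothetical sketch — the offers, scores, and weights are all invented — and the output is only ever as honest as the weights you choose.

```python
# Hypothetical sketch: structuring a multi-factor decision as an
# explicit weighted trade-off. All offers, scores (0-10), and weights
# are invented for illustration.

offers = {
    "Offer A": {"salary": 7, "benefits": 5, "location": 9, "growth": 4},
    "Offer B": {"salary": 9, "benefits": 8, "location": 3, "growth": 6},
    "Offer C": {"salary": 6, "benefits": 6, "location": 7, "growth": 9},
}

# The weights encode what you actually care about; arguing with
# yourself about them is most of the value of the exercise.
weights = {"salary": 0.3, "benefits": 0.1, "location": 0.2, "growth": 0.4}

def score(factors):
    """Weighted sum of a single offer's factor scores."""
    return sum(weights[k] * v for k, v in factors.items())

for name, factors in sorted(offers.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(factors):.1f}")
```

The ranking isn't the answer; it's a prompt. If the "winner" feels wrong, your weights were wrong, and now you know something you didn't before.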
Use it to actually learn difficult things. Want to understand how the immune system works? How Renaissance painting techniques evolved? How compilers actually turn code into machine instructions? Don’t just ask for a summary. Ask it to explain the foundational concepts, build on them progressively, test your understanding, point out what you’re missing. Use it as an infinitely patient tutor for things that would otherwise require expensive courses or years of self-study.
The pattern is: identify something genuinely complex that you’ve been avoiding or delegating or just accepting you’ll never understand, then use AI as a framework to make it comprehensible. Not to do the thinking, but to organise the complexity so you can think properly.
The Effort Problem
Here’s why people don’t do this. It requires actual cognitive effort. You have to identify something hard, admit you don’t understand it, and then spend time wrestling with the explanation until it makes sense. That’s work. Real work. The kind that makes your brain tired.
Writing emails with AI doesn’t make your brain tired. It makes you feel clever without having to be clever. And that’s much more appealing than actually expanding your capability.
But if you only ever use AI for things you could already do, you haven’t actually gained anything except a slightly faster way to be exactly as capable as you were yesterday. The real value is in using it to become more capable. To understand things that were previously beyond your reach. To make better decisions because you can actually parse the complexity instead of just guessing.
It’s the difference between using a car to drive in circles around your house and using it to actually go somewhere new.
The Potato In The Supercomputer
We’ve got a tool that can take genuinely complex information and make it comprehensible. Not simple, but comprehensible. There’s a difference.
And we’re using it to write out-of-office messages with jokes in them.
That’s not the AI’s fault. That’s us choosing to microwave potatoes when we could be doing something actually transformational. The tool’s fine. We’re just shit at knowing what to use it for.
