Why Your AI Tools Are Living on Borrowed Time
Why Everything You're Building on AI Will Cost Five Times More Next Year
OpenAI burns five billion annually while we all pretend the maths will magically fix itself. Here's what happens when the money runs out.
There’s a bloke in my neighbourhood who runs a car wash. Been there for fifteen years. Every Saturday, you’ll see him outside with the same knackered pressure washer, same bucket, same grotty sponges that look like they’ve seen better days since the Major government. The business makes enough to keep him in fags and pay the rent on the unit. Not glamorous, but it works. Last month, I was with him when he turned away some geeky type who wanted to “scale his operation” with an app and automated brushes. The car wash bloke looked at him like he’d just offered to set fire to his house, which, knowing how these things work, is basically what the offer amounted to.
I mention this because I’ve been reading about the financial standing of AI companies, and the numbers are so staggeringly stupid that I had to check I hadn’t accidentally opened a satirical mag.
OpenAI, the company that’s supposedly revolutionising everything, is projecting losses of fourteen billion dollars for 2026. Not fourteen billion in investment. Fourteen billion in actual losses. That’s nearly triple what they lost last year.
The Maths Still Doesn’t Maths
Apparently, OpenAI is on track for revenue of over twenty billion this year, which sounds impressive until you realise they’re planning to burn through fourteen billion more than they make. They’ve got something called “Project Stargate,” a hundred-billion-dollar data centre that needs as much power as ten nuclear reactors. Actual nuclear reactors. For a chatbot.
To put this in perspective, they’re expecting cumulative losses of one hundred and fifteen billion dollars by 2029. That’s more than Uber burned through on its way to maybe becoming profitable, more than WeWork incinerated before it collapsed entirely. The Manhattan Project cost thirty billion in today’s money. The Apollo programme cost 288 billion over thirteen years. OpenAI is trying to burn through that level of cash in five to seven years, and we’re all just nodding along like this is normal business behaviour.
Anthropic, the other major player, isn’t much better. They’ve just raised their valuation to 350 billion dollars after bringing in another ten to fifteen billion in funding. Their revenue forecast for this year is eighteen billion, which sounds brilliant until you factor in that they’re spending twelve billion on training models and another seven billion just to answer questions. That’s nineteen billion in costs before you’ve paid for the building or the staff.
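Purely as a sanity check, here’s the arithmetic behind those Anthropic figures, using only the numbers quoted above (all in billions of dollars):

```python
# Back-of-envelope check of the Anthropic figures quoted in the text.
# All amounts in billions of dollars; these are the article's numbers, not audited accounts.
revenue = 18.0            # forecast revenue for the year
training_spend = 12.0     # spent training models
inference_spend = 7.0     # spent just answering queries
core_costs = training_spend + inference_spend

print(core_costs)            # 19.0 -- already more than revenue
print(revenue - core_costs)  # -1.0, before rent, staff, or anything else
```

That’s a negative gross margin before a single salary or lease payment, which is the point.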
The Efficiency Myth
I’ve seen first-hand how companies convince themselves that if they just get big enough, the costs will somehow magically come down. The technical term for this is “economies of scale,” and it works brilliantly for product manufacturers. But it doesn’t work when your product costs more to make every time you make it marginally better.
The problem is something called “inference costs,” which is the money it takes to make the AI respond when you type something in. These costs don’t go down as you get bigger. They go up. OpenAI is spending half their revenue just keeping the lights on for existing queries, and they need to spend even more to make the next version better. Anthropic is in the same boat, just with different investors subsidising the voyage to nowhere.
The only company that’s worked this out is Google. They’ve built their own chips, they’ve got their own data centres, and they’re the only one in this whole mess that’s actually making money from AI whilst also running a profitable business underneath it all. Their cloud division hit 16.25 billion in revenue last quarter, and their margins are going up, not down. This is because they’re not paying Nvidia three times what the chips are worth, and they’re not renting someone else’s data centre at extortionate rates.
Why This Affects You Specifically
Right, enough diagnosis. Here’s what you need to know if you’re using any of these tools for actual work.
First, the free ride is ending, and it’s ending faster than anyone’s admitting. OpenAI is trying to raise two hundred billion dollars at an 830-billion-dollar valuation. That valuation is more than the GDP of Argentina. They’re not raising that money because everything’s going brilliantly. They’re raising it because they’ve worked out they need it just to keep the operation running for another few years. When that money runs out, or when the investors finally lose patience, the prices are going up. Significantly.
I’ve done the maths on what these services would need to cost to actually break even, and it’s somewhere between five and ten times what you’re paying now. If you’re using ChatGPT Plus at twenty quid a month, the real cost is probably closer to a hundred and fifty. The only reason you’re not paying that is because Microsoft, Nvidia, and a collection of Middle Eastern sovereign wealth funds are subsidising your subscription whilst they fight over who gets to own the future.
Before you integrate any AI tool into your workflow, ask yourself what happens when it costs ten times more. Because that’s the actual cost. Right now, you’re benefiting from what economists call “predatory pricing,” and we, my friend, are the prey.
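Taking that five-to-ten-times estimate at face value (it’s my back-of-envelope figure, not a published number), the implied subscription price is trivial to work out:

```python
# Implied break-even subscription price if the 5-10x subsidy estimate holds.
# The multiples are the article's own estimate, not a disclosed figure.
monthly_price = 20                   # pounds per month, current subscription
low_multiple, high_multiple = 5, 10  # estimated break-even range

print(monthly_price * low_multiple)   # 100
print(monthly_price * high_multiple)  # 200
```

A hundred and fifty a month sits squarely in the middle of that range.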
Second, stop assuming these companies will exist in their current form by this time next year. Perplexity, the AI search engine that’s supposed to challenge Google, is currently valued at twenty billion dollars despite making 148 million in revenue. That’s a revenue multiple of roughly 135. For context, normal software companies trade at maybe five to ten times revenue. Perplexity is priced like it’s going to replace Google. It isn’t going to replace Google. Google has spent the past year integrating its Gemini AI into search, and Perplexity’s own shares are already trading at a discount in private markets because investors have worked out what’s coming.
Third, watch what the companies are actually doing, not what they’re saying. Elon Musk just merged his AI company, xAI, into SpaceX because xAI was burning a billion dollars a month and making less than a billion a year. The company was valued at 250 billion dollars in the deal, which is roughly 250 times what it’s worth based on any sensible financial metric. To me, this looks like a bailout dressed up as strategic vision. When billionaires start combining their failing bets into one massive bet, that’s usually a sign that the original bets weren’t working.
The Questions You Actually Need To Ask
Here’s what I ask now, having watched this exact pattern play out in telecoms, in dot-coms, in ride-sharing, and now in AI.
Who’s actually paying for this? If the answer is “venture capital and sovereign wealth funds,” assume the pricing will change dramatically within two years. If the answer is “it’s cross-subsidised by a profitable business,” that’s safer, but expect to be locked into that company’s ecosystem eventually. Google’s the only one that fits this category, and they’ve got their own problems with antitrust regulators.
What’s my exit cost? If this service disappeared tomorrow, or if the price increased tenfold, how screwed would you be? If the answer is “very,” you need to start building redundancy now, not when they announce the price increase with thirty days’ notice buried in a blog post about “delivering enhanced value.”
Is this solving a problem I actually have? The AI companies have spent billions creating anxiety about falling behind. They’ve convinced half the business world that they need AI-powered everything, when what they actually need is someone competent using a spreadsheet. The car wash bloke doesn’t need an app. You probably don’t need whatever AI-powered workflow optimisation tool is being marketed at you this week.
The Industrial Phase
The analysis I read calls this the “Industrial Phase” of AI, which is a polite way of saying we’ve moved from the bit where clever people do interesting research to the bit where stupid amounts of money get burned trying to turn the research into something profitable. OpenAI needs ten gigawatts of power for Project Stargate. That’s the output of ten large power stations. They’re negotiating with nuclear energy providers because nothing else can deliver that much juice to one location.
When tech companies start building their own nuclear power plants, that’s got to be a sign that the economics have gone completely sideways. It’s like if the car wash bloke decided he needed his own reservoir and hydroelectric dam because he was using too much water. At that point, you’re not running a car wash anymore. You’re running an infrastructure project that happens to wash cars.
Anthropic has worked this out, which is why they’re spreading their bets across Amazon, Google, and Microsoft. They’ve signed agreements worth billions with each of them, which means they’re not locked into any single provider’s extortionate rates. This is smart, but it’s also a sign that the underlying economics are so broken that even the companies burning billions have to hedge against getting fleeced by their own suppliers.
What Happens Next
The car wash is still there, by the way. Still profitable, still boring, still using the same knackered equipment. The geeky bloke has probably already moved on to shilling the next shiny thing. I think about that sometimes when I read that OpenAI is projecting cumulative losses of 115 billion dollars by 2029, or that Anthropic has delayed its profitability target from 2027 to 2028 because the costs keep going up faster than the revenue.
The tragedy isn’t that these AI tools don’t work. They’re genuinely useful. The tragedy is that we’ve built an entire industry on a business model that requires burning through hundreds of billions of dollars of other people’s money, and we’re all pretending that’s sustainable. It’s not sustainable. It’s mental.
Sooner or later, the money runs out, the prices go up, or the whole thing gets quietly folded into Google and Microsoft’s existing businesses whilst everyone pretends that was the plan all along. When that happens, half the tools you’ve built your workflow around will either disappear, quintuple in price, or get gutted of the features that made them useful in the first place.
The smart money isn’t betting on which AI company wins. The smart money is betting on which conventional business can survive when the AI subsidy runs out. And the really smart money is on the bloke with the car wash, because at least his business model makes sense.
