The Next Election Will Be Fought With Fake Videos
Legislation moves at the speed of Victorian plumbing while deepfake technology evolves like a caffeinated ferret on a treadmill
The Deepfake Election Is Already Here
I watched a mate try to convince his elderly mother that the video of her favourite celebrity endorsing crypto wasn’t real. She’d already sent £300 before he could stop her.
The celebrity had looked right into the camera, used her actual voice, and promised guaranteed returns. His mum kept saying “but I saw her say it” like that was supposed to mean something anymore. It took three cups of tea and a phone call to her bank before she finally accepted that seeing isn’t believing, not anymore.
That was a year ago. The technology’s got considerably better since then, and the people using it have got considerably worse.
The Death of Truth
We’re living through the death of recorded truth, except nobody’s bothered to tell the general public yet. Somewhere between 2023 and now, AI video generation crossed a threshold that most people haven’t noticed: it became easier to create convincing fake footage than it is to verify real footage.
A councillor in Gloucester, UK, recently made a fake video of his mayor saying he wouldn’t investigate missing council funds, then defended it by comparing it to his rugby mates photoshopping him into a bikini. The council leader called it “psychological bullying”. Everyone demanded new laws. Then they all moved on, because that’s what we do now. We get momentarily outraged, wait for someone else to fix it, then forget about it when the next scandal arrives.
But by the time you’re demanding regulation, you’ve already lost. The technology’s already out there. The tools are already accessible. And the people who’ll use them are already several steps ahead of whatever Parliamentary committee is forming to “investigate the issue.”
The Mechanism Behind the Threat
The barrier to creating deepfakes used to be technical expertise. You needed proper kit, rendering time, and at least a passing understanding of how video editing worked.
Now you need a laptop and ten minutes. The AI does everything else. You upload a photo, type what you want them to say, and the algorithm generates the video. It’s not perfect, but it’s convincing enough to fool most people scrolling through social media at half past ten on a Tuesday night.
The economics have changed as well. Creating disinformation used to be expensive. Now it’s practically free. Which means we’ve gone from “state actors might deploy deepfakes to influence elections” to “any idiot with an internet connection can generate fake footage of their political opponents.” We’ve democratised the technology without democratising the understanding of what that technology can do.
I’ve watched this pattern before in corporate settings. Someone discovers a powerful tool, uses it without thinking about consequences, and by the time the damage becomes apparent, it’s too late to undo. The difference is the stakes. When it’s spreadsheet errors, you lose money. When it’s deepfakes in politics, you lose the ability to distinguish between what actually happened and what someone wanted you to think happened.
Why Legislation Won’t Save You
Right, here’s the bit that’ll make you properly miserable: waiting for laws to protect you is like waiting for a bus that’s never coming.
Not because politicians don’t want to regulate deepfakes. They do. Everyone’s very concerned. There are committees and consultations and strongly worded statements about protecting democracy. But the legislative process moves at the speed of Victorian plumbing, and AI video technology is evolving at the speed of a caffeinated ferret on a treadmill.
By the time Parliament agrees on what constitutes an illegal deepfake, the technology will have advanced three generations. They’ll be legislating for the problem that existed eighteen months ago while ignoring the problem that exists right now. It’s the same reason we’re still arguing about cookie consent banners while AI is generating convincing fake videos of world leaders declaring war.
And even if they manage to pass sensible legislation, enforcement is a different nightmare entirely. How do you prosecute someone for creating a deepfake when proving the video is fake requires expert analysis that takes weeks? How do you stop the damage when the fake video goes viral before anyone can verify it’s fake? The harm happens immediately. The legal remedy happens eventually, but that’s paperwork, not protection.
The rules always lag behind the technology, and the people willing to break the rules always have an advantage over the people waiting for clarity. If you’re waiting for legislation to tell you how to respond to deepfakes, you’re already six months behind the threat.
What You Actually Need to Do
Stop waiting for someone else to solve this. I’m serious. The government isn’t coming to save you. The platforms aren’t going to moderate their way out of this. Your media literacy curriculum from 2015 is not equipped for the reality of 2026. You need to build your own defences, and you need to do it now.
First, develop extreme scepticism about video content from social media. I don’t mean healthy scepticism. I mean the kind of paranoid mistrust you’d apply to a bloke selling genuine Rolex watches out of a suitcase at Victoria Station.
If it makes you angry, suspicious, or confirms what you already believe, assume it’s designed to manipulate you until proven otherwise. The burden of proof has shifted. Real content now has to prove it’s real. That’s backwards and depressing, but it’s where we are.
Second, apply the source verification test before sharing anything. Who posted this? What do they gain from you believing it? Is there corroborating evidence from multiple independent sources? If you can’t answer those questions, don’t amplify it. Every share, retweet, and forward gives credibility to content that might be completely fabricated. Be cautious and be responsible.
Third, teach the people around you to think like this. Your parents, your mates, your colleagues who still think Facebook videos are verified news. They’re the most vulnerable because they grew up in a world where video evidence actually meant something. That world is gone. Help them understand that before they send money to “CryptoBollock Investments” or share a fake video of a politician saying something inflammatory.
Fourth, demand that organisations you’re part of establish clear policies now. Your workplace, your council, your professional bodies. If they don’t have explicit rules about AI-generated content, they’re not prepared for what’s coming. Don’t wait for a crisis to force the conversation. Have it now, while you still have time to think clearly instead of reacting desperately.
The Scale of What’s Coming
The Gloucester council video is a preview, not an aberration. This is what happens when powerful technology becomes accessible to everyone. And it won’t be the last. It won’t even be the worst.
We’re heading into election cycles where every candidate can be made to say anything. Where every policy announcement can be fabricated. Where every scandal can be synthesised. The technology isn’t just good enough to fool casual viewers. It’s good enough to fool journalists operating under deadline pressure. It’s good enough to fool fact-checkers who don’t have time for forensic analysis.
And the people creating this content don’t need to fool everyone. They just need to fool enough people to shift perception. A fake video doesn’t need to be believed by 100% of viewers. It needs to be believed by the 15% of people whose votes decide elections. It needs to circulate long enough to damage reputations. It needs to create enough confusion that truth becomes negotiable.
I’ve watched people make catastrophic decisions based on incomplete data they didn’t bother to verify. The pattern is always the same: someone presents compelling evidence, nobody questions the source, and by the time the truth emerges, the damage is irreversible. We’re about to see that same pattern applied to democratic processes, except instead of bad quarterly results, we get bad governments elected on the basis of fabricated reality.
The Reality Check
The depressing truth is that most people won’t change their behaviour until they personally get burned. They’ll keep trusting video evidence until their own face appears in a deepfake saying something they never said. They’ll keep sharing inflammatory content until someone uses the same tactics against them. Human beings are brilliant at understanding problems in the abstract and terrible at applying that understanding to their daily habits.
I know this because I’ve been that person. I’ve shared content without verification. I’ve believed video evidence because it confirmed my biases. I’ve been part of the problem, and the only reason I stopped is because I saw the damage firsthand. Most people won’t get that wake-up call until it’s too late.
So here’s my final bit of pessimistic advice: assume the next election will be fought with deepfakes. Assume your social media feeds will be flooded with synthetic content designed to manipulate your emotions. Assume that video evidence means nothing without corroboration. Build your scepticism now, while you still have the luxury of preparation, because once the flood starts, you won’t have time to develop critical thinking skills.
Back to My Mate’s Mum
My mate’s mum never got her £300 back. The bank said she’d authorised the transfer herself, which was technically true. The celebrity never endorsed anything, but the video looked real enough to convince someone who’d spent seventy years living in a world where seeing was believing.
She still watches videos of that celebrity. She just doesn’t trust any of them anymore. Which is a solution, I suppose, but it’s the kind of solution that involves throwing out the entire concept of video as evidence because we can’t reliably distinguish real from fake.
The technology isn’t going away. The people willing to misuse it aren’t going to suddenly develop ethics. And your government isn’t going to legislate fast enough to protect you from what’s already here. The deepfake election has already started. You just haven’t noticed yet.
It’s not safe. It hasn’t been safe for a while. And waiting for permission to be paranoid is just another way of staying vulnerable.
