Ken, Gary, and the Death of Customer Service
They told us chatbots would save time. They lied.
There’s a specific type of claustrophobia that only occurs in supermarkets, and it happens at the self-service checkout when the machine decides you’ve committed some unspecified crime against retail.
You scan your packet of digestives, and the screen freezes. A robotic voice announces “unexpected item in bagging area” with the same warmth as a coroner identifying your body. You remove the biscuits. You place them back. Nothing changes.
The machine doesn’t care. It’s not programmed to care. It’s programmed to make you wait for Gary from Customer Services, who’s currently dealing with someone else’s bag-for-life crisis three checkouts over.
You stand there, clutching your digestives like a guilty shoplifter, while the queue behind you radiates pure hatred. The machine blinks at you. “Please wait for assistance.” It’s not a request. It’s a command. And you’ll bloody well wait because Gary’s the only human being in this entire transaction who’s authorised to press the magic button that acknowledges your existence as something other than a potential thief.
That’s the future of customer service, except Gary’s been replaced by a chatbot called Ken, and Ken’s even less use than the self-service checkout because at least the checkout eventually summons a human. Ken just loops you back to the start.
Enter Ken, Your Personal Obstacle
Which is exactly what happened to me three weeks ago when I tried to cancel a subscription service I’d forgotten I was paying for. Not a massive subscription, just £8.99 a month for a meditation app I’d downloaded during a brief period of optimistic self-improvement way back in September. By November, I’d meditated precisely twice, both sessions interrupted by my own thoughts about how much time I was wasting sitting cross-legged pretending to achieve inner peace when I could be doing literally anything else.
So I went to cancel it. Simple, right? Except there’s no cancel button. There’s never a cancel button. There’s a “help” button that launches me into a conversation with Ken, who greets me with the enthusiasm of a dental receptionist.
“Hello! I’m Ken, your personal assistant. How can I help you today?”
Personal assistant. That’s the first lie. Ken’s not my personal assistant. Ken’s a script written by someone who’s never met me, designed to protect the company’s revenue by making cancellation as appealing as chewing glass. But fine, I’ll play along.
“I want to cancel my subscription.”
“I’m sorry to hear you’re thinking of leaving! 😔 Can you tell me more about what’s not working for you?”
Notice the emoji. That’s calculated. Some designer spent actual time deciding which emoji best conveys synthetic sadness. They settled on that one because it looks like Ken’s personally devastated by my decision, as if we’ve shared years of meaningful friendship and I’m now abandoning him to pursue a life of chaos and unmanaged stress.
I don’t want to tell Ken what’s not working. I want to cancel. But Ken’s been programmed to make me explain myself, like I’m asking permission to leave a cult.
“It’s not for me. I’d like to cancel.”
“I understand! Have you tried our new guided breathing exercises? Many users find them really helpful for stress management. Would you like me to set up a personalised plan?”
No. No, I would not. I don’t want a personalised plan. I want out. But Ken’s not listening because Ken’s not capable of listening. Ken’s following a flowchart designed by someone in San Francisco who attended a workshop called “Reducing Churn Through Conversation Design” and came away convinced that if you just ask users enough questions, they’ll forget they wanted to leave in the first place.
This continues for seventeen minutes. Seventeen actual minutes of my finite existence on Earth, spent explaining to a piece of software that I want to stop paying for something I’m not using. Ken suggests alternatives. Ken offers discounts. Ken asks if I’ve tried the sleep sounds feature. Ken does everything except the one thing I’ve asked him to do, which is cancel my bloody subscription.
Eventually, buried in Ken’s eighth response, there’s a link. “Still want to cancel? Click here.” I click it. It takes me to a form. The form asks why I’m cancelling and presents me with a dropdown menu of pre-approved reasons, none of which include “because I’m talking to a chatbot called Ken and I’d rather eat my own shoe than continue this conversation.”
The View from the Other Side
This is the chatbot lie. Not that chatbots don’t work, they do work, magnificently, but not for you. They work for the company. They work to deflect, delay, frustrate and ultimately exhaust you into either giving up or downgrading to a cheaper plan that still keeps you in the system. They’re the digital equivalent of those Skinner box experiments where they put a rat in a box and make it press levers for food, except you’re the rat and there’s no food, just the faint hope that if you press enough buttons, you might eventually be allowed to leave.
I’ve seen this from the inside. Years ago, when I was still pretending to be a responsible adult with a proper job title, I sat in meetings where people discussed “customer retention strategy” as if it were a noble pursuit rather than a cynical attempt to make leaving so unpleasant that people would rather keep paying than deal with the hassle. We measured “deflection rates” with the same enthusiasm that other people measure their children’s height.
The deflection rate, if you’re lucky enough not to know, is the percentage of customers who start a conversation intending to cancel but end up not cancelling. A high deflection rate is considered good. It means the system’s working. It means Ken’s doing his job. It means people are giving up.
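For the curious, the arithmetic behind that number is trivial. Here’s a minimal sketch of the sum, with figures I’ve invented purely for illustration; real dashboards dress it up in charts, but this is the core of it:

```python
# A minimal sketch of the deflection-rate sum. The figures are
# hypothetical, invented for illustration, not from any real dashboard.

cancel_intents = 1000      # conversations that opened with "I want to cancel"
actually_cancelled = 280   # conversations that ended in a cancellation

deflection_rate = 1 - (actually_cancelled / cancel_intents)
print(f"Deflection rate: {deflection_rate:.0%}")  # Deflection rate: 72%
```

That 72% is seven hundred and twenty people who arrived wanting to leave and didn’t. In the meetings I sat in, a number like that got its own slide.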
Nobody in these meetings ever said “we’re making it deliberately difficult to cancel.” That would sound evil. Instead, we said things like “we’re providing opportunities for customers to explore alternative solutions” or “we’re ensuring customers make informed decisions.” Informed decisions. As if the decision to stop paying for something requires the same level of consideration as buying a house or choosing a life partner.
How Ken Actually Works
The mechanism behind this is breathtakingly simple. You build a chatbot that’s programmed with one primary directive: don’t let them leave. Everything else is window dressing. The friendly tone, the emojis, the “I understand” statements, they’re all theatrical props designed to make you feel like you’re having a conversation when actually you’re navigating an obstacle course.
The chatbot achieves this through a few reliable techniques. First, it makes you explain yourself. Humans are social creatures; we’re conditioned to justify our decisions when questioned. If someone asks “why are you leaving?” we feel compelled to answer, even when the asker is a piece of software that doesn’t give a toss about our reasons. This buys time and creates friction.
Second, it offers alternatives. “Would you like to pause your subscription instead?” “Have you considered our basic plan?” “What if we gave you three months at 50% off?” Each offer requires a decision, and every decision is a moment where you might think “actually, maybe I’ll just keep it.” This is the same psychology that makes people buy extended warranties they’ll never use. You’re tired. You’re annoyed. You just want this to end. Saying yes to a discount feels like a compromise, even though you’re still paying for something you don’t want.
Third, it creates ambiguity about whether you’ve actually cancelled. You click “cancel,” but then Ken says “your subscription will remain active until the end of your billing period” or “you can reactivate any time by logging back in.” This leaves you uncertain. Have you cancelled or haven’t you? You’ll probably check again next month, just to be sure, and by then you might have forgotten why you wanted to cancel in the first place.
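String those three techniques together and the flowchart practically writes itself. What follows is my guess at its shape, a satirical sketch rather than anyone’s actual code, but I’d be surprised if it’s far off:

```python
# A satirical sketch of the retention flowchart: the shape of the logic,
# not any real company's implementation. Primary directive: don't let them leave.

RETENTION_STEPS = [
    "I'm sorry to hear that! 😔 Can you tell me more about what's not working?",
    "Have you tried our new guided breathing exercises?",   # the alternative
    "Would you like to pause your subscription instead?",   # another alternative
    "What if we gave you three months at 50% off?",         # the discount
]

def handle_cancellation(get_reply):
    """Walk the user through every deflection step; any wobble means retention."""
    for offer in RETENTION_STEPS:
        reply = get_reply(offer)
        if reply.strip().lower() != "cancel my subscription":
            # Techniques one and two: an explanation or a moment of
            # hesitation is treated as consent to stay.
            return "Great news! I've kept your plan active."
    # Technique three: even the exit is worded to leave you
    # unsure whether you've actually cancelled.
    return ("Still want to cancel? Click here. Your subscription will "
            "remain active until the end of your billing period.")

# The only winning move is monotone repetition:
print(handle_cancellation(lambda offer: "Cancel my subscription"))
```

Note what the sketch makes obvious: the exit only appears after the full script has run, and any answer other than flat repetition restarts the sales pitch.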
The really insidious part is that chatbots are marketed as efficiency tools. Companies tell shareholders they’re “improving the customer experience” and “reducing wait times” by implementing AI-powered support. And technically, they’re not lying. You don’t wait for a human. You get an instant response. But the response is worthless because it’s not designed to solve your problem, it’s designed to make solving your problem just difficult enough that you’ll tolerate the problem instead.
Ken Is Everywhere
This isn’t unique to subscription services. I’ve encountered versions of Ken everywhere. There’s the Ken who handles airline refunds and insists I fill out a form that requires my booking reference, passport number, blood type and a sworn statement from my mother confirming I’m not trying to defraud the system. There’s the Ken who manages my broadband account and responds to “my internet isn’t working” with a fourteen-step troubleshooting process that includes unplugging the router, waiting exactly thirty seconds (not twenty-nine, not thirty-one), plugging it back in, and sacrificing a small woodland creature to the gods of connectivity.
And then there’s the Ken I encountered trying to dispute a charge on my credit card. This Ken was particularly special because he combined the usual chatbot incompetence with an additional layer of security theatre. Before he’d even discuss the charge, I had to verify my identity by answering three security questions, including “what was your first pet’s name?” which I got wrong because apparently I entered “Mr. Whiskers” into the system twelve years ago when I actually called him “Whiskers” and now the database is convinced I’m an imposter trying to steal my own identity.
After failing the security questions, Ken cheerfully informed me I’d need to call the fraud department directly. He even provided a phone number. I called the number. It was an automated system. The automated system asked me to input my account number using my phone’s keypad. I input the account number. The system said “I’m sorry, I didn’t get that. Please try again.” I tried again. It didn’t get it again. After three attempts, it said “I’ll transfer you to a representative.” I waited on hold for forty-five minutes listening to a loop of royalty-free jazz that sounded like it had been composed specifically to induce apathy. When a human finally answered, the first thing she said was “can I take your account number?”
This is what efficiency looks like in 2025. We’ve automated customer service to the point where it takes far longer to resolve simple issues than it did when you could just walk into a bank and talk to an actual human being who had the authority to look at your account and say “yes, that charge is wrong, we’ll fix it.”
What to Actually Do About It
The chatbot lie only works if you play along. The moment you understand that Ken isn’t trying to help you, that Ken is actually a sophisticated stalling mechanism designed to protect corporate revenue, you can start treating the interaction for what it truly is: a hostile negotiation with an algorithm.
So here’s what I actually do now, and what you should do too if you ever find yourself trapped in conversation with a customer service chatbot that’s clearly wasting your time.
First, don’t explain yourself. Don’t tell Ken why you’re cancelling or what’s not working or whether you’ve tried the new features. Ken doesn’t care, and even if he did, he’s not capable of processing your actual concerns. He’s matching keywords to pre-programmed responses. Your lengthy explanation about how the meditation app made you feel worse rather than better is just noise to Ken.
Say exactly what you want: “Cancel my subscription.” If Ken asks why, say it again: “Cancel my subscription.” Do not engage with the therapy session Ken’s trying to initiate.
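Why does flat repetition work? Because, as far as I can tell, Ken “listens” by matching keywords to canned replies. Here’s a hypothetical sketch of the idea, mine rather than any vendor’s; real systems use fancier intent models, but the failure mode is the same:

```python
# A hypothetical sketch of keyword-to-canned-reply matching.
# Real chatbots are fancier about it, but not fundamentally kinder.

CANNED_REPLIES = {
    "cancel": "I'm sorry to hear you're thinking of leaving! 😔",
    "refund": "Have you seen our FAQ on billing?",
    "broken": "Have you tried turning it off and on again?",
}

def ken(message: str) -> str:
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in message.lower():
            return reply
    # Anything without a recognised keyword lands in the fallback bucket.
    return "I didn't quite catch that. How can I help you today?"

# Your heartfelt explanation is noise to Ken:
print(ken("The app made me feel worse, honestly, and I never use it."))
# -> I didn't quite catch that. How can I help you today?

# Three blunt words hit the script:
print(ken("Cancel my subscription."))
# -> I'm sorry to hear you're thinking of leaving! 😔
```

Your lengthy explanation, unless it happens to contain a trigger word, doesn’t even register. The three-word version does.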
Second, immediately look for escape routes. Most chatbots have a “talk to a human” option buried somewhere in the interface. It might be disguised as “technical support” or “complex enquiry” or hidden behind a “still need help?” link that only appears after Ken’s cycled through his full repertoire of useless suggestions.
Find that option. Click it. Don’t feel guilty about “wasting” a human agent’s time. That’s what they’re there for, and they’re probably grateful for the break from dealing with Ken’s failures.
Third, document everything. Take screenshots. Note dates and times. If Ken tells you your subscription is cancelled, screenshot it. If Ken promises a refund, screenshot it. Chatbots are designed to be slippery. They’ll tell you one thing, and then the billing system will do another thing, and when you complain, there’ll be no record of what Ken promised because Ken’s conversation logs are designed to protect the company, not you. Your screenshots are evidence.
Fourth, know when to abandon the chatbot entirely and go directly to your bank. If you’ve asked three times to cancel something and the company’s still charging you, dispute the charge with your card provider. Yes, this is nuclear. Yes, it might burn bridges. But it’s also effective, and sometimes effectiveness matters more than maintaining a polite relationship with a company that’s clearly not interested in maintaining one with you. Your bank has humans who process disputes, and those humans have actual authority to reverse charges. They’re infinitely more useful than Ken.
And finally, accept that this is going to keep happening. The chatbot lie isn’t going away. It’s spreading. More companies are deploying more Kens to handle more aspects of customer service because Kens are cheap and humans are expensive and shareholders care more about quarterly profits than they do about your experience trying to cancel a meditation app. The technology will get better at pretending to understand you, but it won’t get better at actually helping you because that’s not what it’s designed to do.
The Question
The question isn’t whether chatbots work. They work brilliantly. They reduce support costs. They deflect cancellations. They make customer service interactions just annoying enough that you’ll think twice before initiating another one. They’re working exactly as intended. The question is whether you’re going to keep pretending that Ken’s your personal assistant, or whether you’re going to treat him like what he actually is: a digital bouncer stationed at the exit, whose job is to make leaving as unpleasant as possible.
I eventually cancelled that meditation subscription. It took three separate attempts, two emails to the company’s support address (which also went to an AI chatbot initially), and finally threatening to dispute the charge with my bank. They cancelled it within an hour of that threat. Funny how actual consequences produce actual results.
And the worst part of all of this is that I’m not even stressed about it anymore. I’ve achieved a sort of zen-like acceptance that this is just how things work now. Every subscription is a trap. Every “free trial” is a countdown to the day you’ll be negotiating with Ken. Every “customer service” portal is a maze designed by people who view customer retention as a metric to be optimised rather than a relationship to be earned.
I haven’t tried meditating since. Don’t need to. I’ve learned to find inner peace through accepting that the world is fundamentally designed to frustrate me, and there’s nothing I can do about it except document everything, avoid the chatbots, and keep my bank’s dispute line on speed dial.
Rather like standing at that self-service checkout, holding your digestives, waiting for Gary to acknowledge your existence. Except Gary’s real. Online, there’s just Ken. And Ken’s never going to help.
