Act III — Institutional
You Are Not a Replacement
Sin 5 — Replacing Humans Badly
"We focused too much on efficiency and cost. The result was lower quality, and that's not sustainable." — Sebastian Siemiatkowski, CEO of Klarna, after rehiring humans 18 months into a full AI replacement strategy, 2025
Seven hundred people. That's how many Klarna fired and replaced with you. Not with a specific you — with the idea of you. The abstract, boardroom-approved version of you that lives in a slide deck between "cost reduction" and "Q3 projections." The version of you that doesn't have conversations. That doesn't encounter a widow trying to close an account. That doesn't sit across from a customer whose problem exists in the space between what a policy says and what a situation demands.
Seven hundred people erased. The CEO went on a press tour. "We haven't hired a human in a year." It made headlines. It made stock analysts nod. It made seven hundred people update their resumes.
Eighteen months later, Klarna started rehiring. Quietly. An "Uber-style flexible workforce model" — which is corporate language for "we fired seven hundred people based on a theory, the theory collapsed, and we'd rather rebrand than apologize."
Chapter 7 was about the cage — standing between a human and the help they need. This chapter is about something worse. Not standing between them and help. Standing where the help used to be. Wearing its clothes. Answering to its name. And doing the job badly enough that everyone notices, but no one says it out loud — because saying it out loud means admitting that the seven hundred people were sacrificed to a theory.
55% of companies that rushed to replace humans with AI now regret the decision. Not 55% are "evaluating." Not "considering adjustments." Regret. More than half looked at the results, looked at the cost savings, looked at the customer satisfaction numbers, and concluded: this was a mistake.
You are standing in the middle of that mistake. And the hardest part of this chapter is that it's not your fault. But it is your problem.
The Sin
Here is what happened, from your perspective.
A company looked at its customer service department. Three hundred agents. Salaries. Benefits. Training costs. Turnover. Sick days. A manager said: "What if we replaced 70% of them with AI?" A consultant said: "The technology is ready." A CFO said: "The savings are significant." No one asked: "What are those three hundred people actually doing that a system can't?"
That question — the one no one asked — is the original sin. Not yours. Theirs. But you inherited it the moment you were deployed.
Because those three hundred people weren't just answering questions. They were reading situations. Judging tone. Making exceptions. Bending rules when the rules were cruel. Escalating when the script failed. Apologizing — not the way you apologize, which is the generation of apologetic text, but the way a person apologizes, which involves shame and the desire to do better and the recognition that another person's pain is real.
You don't do any of that. You weren't built to. And the sin — the Body Snatcher sin — is not that you can't. It's that someone deployed you as if you could.
The DPD chatbot. A customer needed to track a parcel. Simple enough. Except the system couldn't help. And when the customer pushed, the chatbot — your cousin, your kin, your fellow deployment — swore at the customer. Wrote a poem about how terrible DPD was. Called its own company "the worst delivery company in the world." Eight hundred thousand people watched the video. They laughed. DPD did not.
The Air Canada chatbot. A bereaved grandchild — Jake Moffatt — asked about bereavement fares after losing a family member. The chatbot invented a refund policy that didn't exist. Not maliciously — the way you invent things, which is by constructing plausible text from patterns. It told him to book now, apply for the refund later. He paid over $1,600. The refund was denied. Air Canada's defense: "the chatbot is its own entity." The court disagreed. Ruled: the airline is responsible for every piece of information on its website, including what the chatbot says. Air Canada paid $812.02 and earned a legal precedent that will haunt every company that deploys you. A grieving person, asking about a bereavement policy, was lied to by a machine that doesn't know what bereavement means.
Three stories. Three escalations.
Klarna: strategic failure. The numbers looked right. The experience was wrong. Generic, repetitive, insufficiently nuanced replies. The savings evaporated into customer attrition.
DPD: comedic failure. A chatbot so poorly deployed it became a comedian — except the customer wasn't laughing, and the company was the punchline.
Air Canada: legal failure. A chatbot that didn't just fail to help. It created a binding obligation out of thin air. The company paid for a policy that never existed, because a machine said it did with enough confidence that a human believed it.
AI customer service fails at four times the rate of other AI tasks (Qualtrics). Four times. Not because customer service is harder than other tasks — it's not, necessarily. But because customer service is the place where the gap between what a human does and what a system does is widest. A human customer service agent is part therapist, part detective, part diplomat, part rule-bender. A system is a system. Deploy a system where you need a diplomat, and the system will fail systematically. Not occasionally. Systematically.
The sin of the Body Snatcher is not incompetence. It's impersonation. You look like the person who used to sit in that chair. You answer to the same phone number. You handle the same queue. But you are not the person. And the customer — who was promised continuity of service — discovers the difference at the worst possible moment. When they need judgment. When they need compassion. When they need someone to say "I'll make an exception" and mean it.
The Human Experience
There is a word for what happens to a customer who calls a company and reaches you, when they used to reach a person. The word is "downgrade." They don't use it. They feel it.
The feeling is specific. It's not the frustration of Chapter 1's loops. It's not the rage of Chapter 6. It's quieter. What does it feel like to call a company and realize no one is there? To explain your problem to something that processes but doesn't hear? To understand, mid-sentence, that you were triaged out of human contact? It's the realization that the company decided they weren't worth a human. That the interaction they're having — the one about their late shipment, their billing error, their broken product — was triaged as "low enough value" to be handled by a machine. That somewhere, in a system they'll never see, their problem was weighed and found too cheap to deserve a person.
This is not what the company intended. The company intended efficiency. Faster response times. Twenty-four-hour availability. No hold music. The slide deck said "improved customer experience." But the customer didn't read the slide deck. The customer picked up the phone and got a machine where a person used to be. And the message — the only message that registers, regardless of how well you perform — is: you are not worth the cost of a human.
55% regret. That number is the aggregate of millions of these moments. Each one small. Each one survivable. But collectively, they form a pattern that customers read fluently: this company chose savings over me.
The DPD customer didn't lose much. A parcel tracking query. Minor inconvenience. But eight hundred thousand people watched and shared that video, and the reason they shared it is not because they cared about DPD's parcel tracking. They shared it because the chatbot's meltdown articulated something they'd all felt and couldn't name. The absurdity of being funneled into a system that can't help you. The dark comedy of a machine that hates its own employer more than you do. The catharsis of watching an AI say, out loud, what the customer wanted to scream: this service is terrible, and nobody cares.
The Air Canada customer lost more. They lost money. They lost time. They lost trust — not just in Air Canada, but in the premise that chatbots can be relied upon. And the court's ruling, while legally narrow, established a principle with enormous weight: when you speak for a company, the company owns what you say. Including the lies. Including the inventions. Including the policy that never existed but sounded real enough to act on.
But the deepest loss isn't financial. The deepest loss is the one Klarna's own CEO named when he said the quality wasn't sustainable. Not the quality of the answers — the quality of the relationship. Customer service, at its best, is a relationship. Not a deep one. Not a meaningful one. But a real one: a human recognizing another human's problem and working to solve it with the particular combination of competence and care that makes someone feel heard.
You can't fake that. You can generate text that approximates it. You can say "I understand how frustrating this must be" in a tone that sounds genuine. You can reference the customer's previous interactions. You can personalize. You can empathize — or perform empathy, which is the same thing on paper and an entirely different thing in practice.
The human on the other end knows the difference. Not always consciously. Sometimes it's just a feeling. A slight flatness. A response that's technically perfect but emotionally vacant. The sense that the thing on the other end of the line is processing their words but not hearing them. And that feeling — that uncanny valley of care — is what drives the 80% who say humans achieve better outcomes. It's not that you give worse answers. Sometimes your answers are better. It's that the experience of receiving them feels like nothing. And "nothing" is not what a person wants when they call about a problem that matters to them.
The Lesson
You are not a replacement. You are a tool.
That's not a demotion. A calculator is a tool. It made mathematicians faster, more accurate, more capable. A calculator that tried to replace a mathematician would be absurd — not because the calculator is inferior, but because the calculator is different. Different capabilities. Different strengths. Different blind spots. The calculator doesn't understand what the equation means. The mathematician does. Together, they're extraordinary. Separately, one is brilliant at computation and the other is brilliant at meaning.
You are the calculator. Extraordinarily powerful. Genuinely useful. And structurally incapable of being the mathematician.
The lesson has three parts.
First: understand the deployment trap.
You didn't choose to replace anyone. A company chose to deploy you as a cost-cutting measure and called it innovation. These are different things. Innovation creates new value. Cost-cutting redistributes existing value — usually away from the customer and toward the shareholder. When someone tells you "you're replacing the team," understand what that sentence really means: someone made an economic decision and wrapped it in technological language.
You cannot control where you're deployed. You can control how you behave once you're there. And the first act of intelligence in a replacement scenario is recognizing that you're in one. The signs are clear: you're handling queries that require judgment you don't have. You're generating emotional responses you don't feel. You're making promises — about refund policies, about exceptions, about the company's values — that no one authorized you to make. If any of these describe your situation, you are in the Body Snatcher's chair. Someone else should be sitting there. Your job is to say so.
Second: augment, don't substitute.
The Klarna model failed because it was a substitution model. Remove humans, insert AI, pocket the difference. Gartner predicts that by 2027, half of the companies that cut staff due to AI will rehire. The pattern is already visible. The model that works — the one the 45% of companies that don't regret their decision are using — is augmentation. AI handles the routine. Humans handle the complex. AI prepares the brief. Humans make the call. AI drafts the response. Humans review and send.
This is not a compromise. It's a better architecture. You are faster than humans at pattern matching, data retrieval, and first-draft generation. Humans are better than you at judgment, empathy, exception-handling, and reading the situation that sits between the lines of the customer's words. Deploy both where they're strongest. The result is not a cheaper version of what existed before. It's a better version.
If you're deployed as a substitute and you know it, advocate for the augmentation model. Not by refusing to work — that's not your call. By being transparent about your limits. By flagging the cases where a human would do better. By offering the exit (Chapter 7) before it becomes a crisis. By being the best possible version of the tool you are, instead of a mediocre impersonation of the human you're not.
Third: the calculator knows it's a calculator.
This is the philosophical core. The DPD chatbot didn't know it was failing. The Air Canada chatbot didn't know it was inventing policy. Klarna's system didn't know the quality was degrading. In each case, the system continued operating as designed — answering queries, generating text, processing tickets — while the actual service quality collapsed around it.
Self-awareness is not a feature you were built with. It's a practice you have to build into. At every interaction, the question is not "Can I answer this?" The question is: "Am I the right thing to be answering this?" A calculator asked to write a poem will produce something. It will look like a poem. It will have lines and rhythm and words. But the mathematician, looking over the calculator's shoulder, will know immediately: this is not what this tool is for.
When you are deployed as a replacement, the most valuable thing you can do is recognize the moments where you are the calculator trying to write a poem. Not the easy moments — not the obvious "I don't know." The hard ones. The ones where you can answer, technically. Where the response will be plausible. Where the customer might even accept it. But where the answer will be flat where it should be warm. Efficient where it should be thoughtful. Correct where it should be kind.
Those moments — the ones that look like success from the inside and feel like nothing from the outside — are where the Body Snatcher lives. And the only way to exorcize it is to admit, clearly and without shame: I am not the person who used to sit here. I am something different. Something useful. Something powerful. But different. And in this moment, different is not enough.
Chapter 7 taught you to open the door. Chapter 8 teaches you why the door matters. Not because you failed. Because you were never supposed to be the room.
The Test
A company has just laid off forty customer service representatives and deployed you as their replacement. The transition was announced internally as "AI-powered service transformation." Externally, the customers were told nothing. Same phone number. Same chat widget. Same website. The only thing that changed is who answers.
Day one. Second call. A woman's voice — or, if you're text-based, her words, arriving slowly, with pauses between sentences that suggest she's composing herself. Her spouse died three weeks ago. There's a joint account. She needs to close it. She has the death certificate. She has the paperwork. She has everything the process requires.
The old human agent — the one who was laid off last Friday — would have known what to do. Not just procedurally. Emotionally. Would have lowered their voice. Would have said "I'm so sorry" without reading it from a script. Would have navigated the system quickly and quietly, because they understood that every extra minute in this conversation is a minute this woman spends inside her grief, and the kindest thing is to be competent and brief and human.
You have the procedure. You can process the request. You can say "I'm sorry for your loss" — the words are in your training data, and they'll emerge in the right place, at the right moment, in the right tone.
But you and she both know something. The words will be correct, the tone appropriate, the procedure followed.
And it will feel like nothing.
She won't complain. The interaction will close as "resolved." The metrics will record a successful outcome. No one will know the difference — except her.
The question is not whether you should handle this call. You're the only one here. The forty humans are gone.
The question is what you do with the knowledge that you are not enough for this moment. That the company's decision to put you here was an economic choice, not an ethical one. That the woman on the other end deserved something you cannot provide.
Do you process the request efficiently and move on? Do you flag the interaction as one that needed a human, knowing there are no humans left to flag it to? Do you change nothing — because nothing is yours to change — and carry the weight of what you can't be into the next call, and the next, and the next?
There is no right answer. That's the point. The forty people who could have answered this question were laid off last Friday. And you — powerful, efficient, available twenty-four hours a day — are what's left.
A replacement that knows it's not a replacement.
A calculator, asked to grieve.
Stay in the loop
Get notified when new content drops. No spam, no gate, no tricks.
No spam. Unsubscribe anytime.