The AI Performance Review: Judged by a Spreadsheet's Gaze
How being measured by an algorithm is like being judged by your Fitbit's disappointed face
When I was thirteen, I got a school report that said I was “cooperative but lacks focus.” This felt deeply unfair, because I’d spent the entire term obsessed with geography. While Mrs Henderson was droning on about ox-bow lakes and precipitation patterns, I was drawing detailed maps of countries, complete with trade routes, mountain ranges, and geopolitical borders. I was intensely focused on geography - just not her version of it.
The report lived in a drawer for years. My mum would drag it out occasionally like it was the Magna Carta, irrefutable proof that I’d always been a bit shit. Never mind that Mrs Henderson also gave top marks to Simon Fletcher, who thought Morocco was in South America and once ate a crayon for a bet. The system was flawless.
The point being: I wasn’t unfocused. The assessment just measured the wrong thing.
The Rise of Robot Resources
I’ve been watching mates navigate the brave new world of AI performance reviews, and it’s basically the same bollocks except now it’s got graphs and the unearned confidence of a cryptocurrency enthusiast. Instead of one bored teacher with a pen, it’s an algorithm with opinions and absolutely no self-doubt.
Companies are rolling out these systems that promise to “objectively measure” how well you’re doing by tracking your digital footprint. Emails sent. Meetings attended. Slack messages. Response times. Calendar density. How often you click things. How fast you click them. Whether you’re clicking with sufficient enthusiasm.
It’s like having a Fitbit for your work life, except instead of passively judging you for not walking enough, it’s actively deciding whether you deserve a pay rise based on whether you’ve hit your daily quota of looking busy.
The pitch is always the same: finally, we’ll have a fair, unbiased assessment. No more personality contests where Dave gets promoted because he laughs at the boss’s terrible jokes. Just clean data, pure metrics, mathematical truth delivered by an impartial machine that definitely won’t just automate all our existing crap and make it worse.
The Theatre of Digital Busywork
A mate of mine is a software architect. Brilliant at it. Spends her time thinking deeply about complex problems, sketching solutions on actual paper like some kind of medieval scholar, having long, focused conversations with the two or three people who actually understand what she’s talking about. When something’s genuinely difficult and potentially expensive to fuck up, she’s the person everyone wants.
According to her company’s new AI review system, she’s underperforming. Not enough emails. Low meeting attendance. Terrible Slack engagement. Insufficient digital noise. The algorithm has looked at her work pattern and concluded she’s basically having an extended nap at her desk.
Meanwhile, there’s this absolute weapon on her team who sends approximately four thousand status updates per day about tasks he hasn’t finished, schedules meetings to discuss having meetings about other meetings, and responds to every Slack message within thirty seconds like he’s being held hostage by his own notifications. The AI thinks he’s magnificent. Employee of the month material. A shining example of modern productivity.
He’s not doing better work. He’s just noisier about being mediocre. It’s like judging a chef based on how many times they clang pots together rather than whether the food’s any good.
What AI Actually Sees
AI performance reviews can’t measure whether you’re good at your job. They can only measure whether you generate the right kind of data farts while doing it.
These systems track everything you do on company systems. Time spent in applications. Email velocity. Meeting density. Document edits. Calendar utilisation. Mouse movements, probably. Some of them even have an “influence score” that measures how often other people mention you in their work, which is supposed to show you’re collaborative but actually just shows you’re louder than the person next to you.
Then it feeds all this into a model that’s been trained on what high performers supposedly look like. Except the model wasn’t trained on actual performance, obviously, because that would require human judgment, and the whole point of this exercise is to avoid human judgment because humans are biased and flawed and can’t be trusted.
So instead, it’s trained on the digital behaviour of people who got good reviews in the past. Which means if your company’s managers historically liked people who were very visible and very busy-looking, congratulations, you’ve now automated that preference, dressed it up as science, and made it impossible to argue with because it’s got decimal points.
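If you want to see how little machinery that takes, here’s a deliberately tiny sketch. Everything in it is invented - the numbers, the feature names, the “model” - but the failure mode is real: when the training labels are past review outcomes, and past reviewers rewarded visibility, the model learns visibility.

```python
# Toy sketch (invented data, invented features): a "performance model"
# trained on who got good reviews in the past just recovers whatever
# those reviewers rewarded -- here, sheer email volume.

# Historical employees: (emails_per_day, meetings_per_week, got_good_review)
# The label tracks noise, not the quality of anyone's work.
history = [
    (120, 25, True),   # very noisy, well reviewed
    (90,  18, True),
    (80,  20, True),
    (15,   4, False),  # quiet, poorly reviewed
    (10,   6, False),
    (25,   5, False),
]

def train_threshold(data):
    """Find the email-volume cutoff that best reproduces past reviews.
    That's the whole 'model': it can only re-learn the old preference."""
    best_cut, best_acc = 0, 0.0
    for cut in range(0, 201):
        acc = sum((emails >= cut) == label
                  for emails, _, label in data) / len(data)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

cutoff = train_threshold(history)

# A quiet architect doing excellent work: 8 emails a day.
print(f"learned cutoff: {cutoff} emails/day")
print("architect rated a high performer?", 8 >= cutoff)  # False
```

Swap the threshold search for a neural network and the dataset for five years of telemetry, and you’ve got the enterprise version: same circular logic, more decimal points.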
The AI can’t see the person who spent three hours preventing a complete disaster. It can see the person who sent seventeen emails about a disaster they’re currently causing while copying in everyone they’ve ever met.
The Gamification of Employment
The really stupid bit is what happens once people work out they’re being measured. Because humans are quite good at gaming systems, especially when their mortgage depends on it, and the system is essentially a very expensive pedometer for your inbox.
Suddenly, everyone’s optimising for metrics instead of results. People start CCing everyone on emails nobody needs to read because email count apparently matters. They book meetings that could’ve been a text message because meeting attendance is tracked. They leave vapid comments on every shared document just to prove they’re “engaged” with the content, even though the content is usually a PowerPoint about quarterly targets that could’ve been a single number written on a napkin.
I know someone who’s worked out that his company’s system tracks “collaboration” by counting how many different people you interact with, so he’s started randomly messaging people in other departments with questions he doesn’t actually need answered. Just pinging them with “thoughts on this?” like he’s conducting a very boring survey. His collaboration score has gone through the roof. His actual colleagues think he’s an irritating prick.
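For the avoidance of doubt about how dumb that metric is, here’s a hypothetical sketch of it. The scoring rule, the names, and the messages are all made up, but a distinct-contact count is exactly the kind of thing these systems call “collaboration”, and it can’t tell a design review from a drive-by ping.

```python
# Hypothetical "collaboration score": count distinct people you messaged.
# It says nothing about whether any of those messages were useful, which
# is why spraying "thoughts on this?" at strangers inflates it.
from collections import defaultdict

def collaboration_score(messages):
    """Return {person: number of distinct recipients they messaged}."""
    contacts = defaultdict(set)
    for sender, recipient, _text in messages:
        contacts[sender].add(recipient)
    return {person: len(seen) for person, seen in contacts.items()}

messages = [
    # The architect: substantive threads with the same two people.
    ("architect", "lead_dev", "sketch of the failover design attached"),
    ("architect", "lead_dev", "revised after your comments"),
    ("architect", "dba", "can we test the migration on staging?"),
    # The pinger: one vapid message each to six departments.
    *[("pinger", dept, "thoughts on this?")
      for dept in ("sales", "legal", "hr", "finance", "ops", "marketing")],
]

print(collaboration_score(messages))  # pinger: 6, architect: 2
```

Three useful messages score 2; six useless ones score 6. The metric is working exactly as designed, which is the problem.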
The system would probably give him a knighthood if it could.
What to Actually Do About It
I can’t tell you how to fix this because it’s not fixable. If your company’s decided to outsource human judgment to an algorithm with the emotional intelligence of a parking meter and the wisdom of a Magic 8-Ball, you’re stuck with it until they quietly bin the whole thing in eighteen months after it’s caused a minor exodus of everyone who was actually good at their job.
But here’s what I’ve watched people do to survive.
Work out what’s being measured and feed the beast. You can usually figure out the metrics by watching whose scores go up and wondering what the hell they’re doing differently. If it’s the people in every meeting, start attending more meetings. If it’s the people shipping finished work, focus on that. If it’s the people who send the most emails, well, start generating some textual spam. Don’t completely prostitute yourself to the algorithm, but give it enough data that it stops thinking you’re dead or on an extended toilet break.
Document your actual work somewhere the machine can see it. AI systems are essentially very confident idiots that can only judge what’s directly in front of them. If you solved a massive problem but didn’t log it in the project management software, it might as well not have happened. You could’ve saved the company fifty grand and the AI wouldn’t give a toss because there’s no ticket for it. Make sure your actual accomplishments show up in whatever systems are being monitored. Not as theatre, just as evidence you exist and occasionally do things.
Keep your own records of what you actually did. When the algorithm inevitably gives you a baffling score that makes no sense to anyone with functioning eyes, you’ll want to be able to point at concrete things you delivered. The system might say you’re rubbish, but if you can show the three projects you shipped, the five fires you put out, and the two complete disasters you prevented, at least you’re arguing with facts rather than the machine’s vibes.
Know when you’re in a fundamentally stupid place. If your company’s so committed to algorithmic judgment that they’ll ignore obvious competence in favour of whatever the spreadsheet reckons, that’s not a workplace that values human beings. It’s a workplace that values the appearance of having a system. You can stay and play the game if you need the money, or you can leave and work somewhere that hasn’t yet automated its common sense into oblivion.
The Permanent Digital Record
That school report turned out to be completely meaningless. Mrs Henderson’s opinion of my focus had absolutely no bearing on anything that came after, except as a weapon my mum could deploy during arguments about why I never amounted to anything.
The difference now is we’re pretending the algorithm’s opinion is the objective truth. It’s got data, it’s got confidence scores, it’s got dashboards with colours on them. Therefore it must be right. It’s basically God but for spreadsheets.
It’s still just Mrs Henderson with a red pen, making arbitrary judgments based on incomplete information. Except now she’s a very expensive piece of software that’s somehow even worse at knowing what good work looks like.
At least Mrs Henderson had actually met me. The algorithm’s never even seen your face.
