Token Legends and Other Ways to Miss the Point
You can't scoreboard your way to good work.
In case you missed it: everybody must use AI. If you don't, you'll be left behind.
Use it for what? Left behind by who?
Doesn't matter. Just figure it out.
A Meta employee recently built an internal leaderboard that ranks roughly 85,000 employees by how many AI tokens they burn through. They call it Claudeonomics. Top performers compete for titles like Token Legend, Session Immortal, and Cache Wizard. In one 30-day window, total usage crossed 60 trillion tokens. The top-ranked employee accounted for 281 billion tokens on their own, which at current Claude pricing works out to north of $1.4 million in a single month. Mark Zuckerberg isn't in the top 250. Neither is the CTO.
$1.4 million for one employee. For what?

Oh, the folly
My Wharton professor Ethan Mollick recently pointed back to a paper I hadn't thought about in years. Steven Kerr wrote it in 1975 and titled it, with admirable bluntness, "On the Folly of Rewarding A, While Hoping for B." The thesis is exactly what it sounds like. Companies constantly pay for one behavior while quietly hoping for another.
Fifty-one years later, we're doing it again. This time with AI tokens.
There's a fear running through a lot of companies right now, and it's existential. What if AI replaces us? Scary. And understandable. The reflexive response has been to push adoption as hard as possible and reward anyone visibly participating. But reward what, exactly? Usage. Not outcomes. Not output.
The Predictable Failure Mode
People are simple. At our core, we choose easy, fast, and cheap. It's a survival mechanism, but it turns corrosive when left unchecked.
When the scoreboard measures an input, people optimize for the input. The Information reported that some Meta employees were leaving agents running idle for hours to climb the rankings. One put it plainly: "You don't want to be the one who solved it in two prompts if everyone else is showing ten."
That's not a story about AI. That's Goodhart's Law with a fresh coat of paint.
The Nuance
Ask anyone who knows me: I'm the first person to tell a friend, a customer, or my own team to go try the new thing. New tool that might save you a headache and a few hours a week? Try it. Test it. Will it be perfect? Probably not. But I can't think of a better way to learn about yourself and the world than running small experiments with new tools.

So yes, adopt the technology. Learn the new processes. Just not with a scoreboard that tracks input.
Start from First Principles
Everyone loves the idea of first principles. Not many people actually use them. So let me ask: what's the point of using AI?
The answers are innumerable and deeply contextual. But if you thought, even for a second, "because someone told me to," I'd raise a flag.
That's the move I keep coming back to at Attrove. Start with the work. What actually needs to happen this week? Where is the friction? Then ask where a model can take a bite out of it. Tools chase work. Work should never chase tools. And yet, that's exactly where we are right now, as we all collectively try to figure this AI thing out.

I use Claude Code for almost everything now. Product management, customer research, meeting prep, marketing drafts, even the videos I make. It doesn't feel scattered because I'm not switching tools for each job. I set the project up once, with the skills, MCP servers, and frameworks I need, and then I stay in one place. The work sits in the center. The tool wraps around it.
If I were being graded on tokens burned, I'd probably get a B+. I delete a lot. I stop a lot. Most of the value shows up in the session I closed after ten minutes because I got what I needed, not the one I left running overnight.
The Outcome, Not the Process
None of this is a shot at the people on that Meta dashboard. I commend anyone willing to take a risk on a new tool, put it in front of actual work, and be visibly bad at it for a while. That's how adoption happens. Most experiments won't land the first time, and that's fine. It's the cost of learning.
But if you're running a team right now, Kerr's question from 1975 is worth sitting with: Is the thing you're celebrating actually the thing you want more of?
Because the alternative is a company full of Cache Wizards who never shipped anything.

