The PM-ification of Everyone

May 8, 2026

At some point, if you use AI enough, you'll read something you wrote and it'll feel slightly off.

Not factually wrong. The argument is fine and the sentences are clean. But the voice isn't quite yours. Too many short declarative lines stacked together. A "not X, but Y" construction you didn't put there consciously. The efficient, frictionless prose that sounds like it came from something that doesn't sleep.

I noticed it on a Sunday in a coffee shop. After reading @thedankoe's essay on writing more essays, I've been spending Sundays writing. That week I was working on a piece about AI at hedge funds, a topic I'm inside every day, and when I read it back something didn't sit right. As usual, I'd been deep in LLMs all week, building on outputs, iterating, sending them out. Somewhere in that loop the rhythm transferred. The piece made sense, it read well, but it didn't sound like me.

That's the thing that really bugged me. Not that the output gets worse. Often it gets better. It's that you start to sound like the AI back.

There's a term for what's happening underneath: cognitive offloading, delegating mental tasks to external systems. It's not new; we've been doing it since calculators, since GPS, since Google. The "Google Effect" documented how search engines changed the way people store information. When you outsource remembering, you remember less.

A 2024 study of 666 people found a significant negative correlation between heavy AI use and critical-thinking ability, with cognitive offloading as the mediating mechanism. The more you delegate the thinking, the more the thinking atrophies. Younger participants showed the highest dependence and the lowest scores, which tracks. We adopted fastest and most completely.

What makes AI different from the Google Effect isn't just scale but the layer it operates on. Search engines outsourced retrieval. AI outsources retrieval and the generative work on top of it: drafting, structuring, arguing. When you hand that off consistently enough, the muscle that atrophies is a different one.

Researchers have started calling it "metacognitive laziness." When the convenience of a tool starts bypassing your own reasoning before you've even tried. Not because you decided to. Because that's what convenience does when you stop paying attention.

I build AI tools for a living. I probably use it for more hours a day than most people writing about its effects. So when I say I've watched the delegation creep in both directions, in my own work and in the people around me, I'm not speaking from the outside.

It started with the complex tasks. Research, synthesis, long-form writing. Then it crept into emails, into quick tasks. Into things that would've been faster to just do myself. The calculus kept shifting, not because the ROI was there, but because the reflex was. Opening my Claude tab had become the default before I'd even decided to.

The vocabulary thing is the most unsettling part for me. I didn't notice my critical thinking slowing because that's hard to measure. I noticed my writing starting to look like what I'd been reading. The specific phrases. The sentence structures. I spend most of my working day in conversation with Claude. Apparently it goes both ways.

It's hard to catch because the output still looks fine. It might even look better than what you'd have written on your own. The prose is tighter, the structure cleaner, the argument more organized. There's no signal that something is off; every surface metric gives you a green light.

What you've outsourced is harder to see. It's the rough thinking. The false starts, the sentences that don't work and teach you why on the way to the ones that do. That friction isn't waste; it's where the cognitive work happens. When you skip it consistently enough, you lose the process that was making the thinking yours. The output is still there, but the process has changed.

You don't notice the atrophy until you need to do it without the tool. Like a muscle you haven't used in months: looks fine on the surface, different when you put weight on it.

Here's where my own argument gets complicated, and I want to be honest about it.

AI makes me faster. It makes me capable of things I genuinely couldn't do before: research that would've taken weeks, analysis that would've needed a team. Some of what I've built in the last two years doesn't exist without it. Of course I'd choose to have AI over not having it.

Maybe the direction of knowledge work is toward delegation and product thinking rather than deep execution. Maybe "doing the work" the way I've understood it is already a legacy concept, and what I'm feeling is just friction from a transition that's been decided.

I don't think I believe that. But I can't dismiss it either.

What I know is I don't want to spend my days reviewing AI outputs and calling that work. Not because it's inefficient, but because it's f*cking boring. The part I care about happens in the difficulty, in the thinking you have to do yourself: the sentence that won't come out right the first six times, the argument you can only find by actually arguing it.

Maybe that's a preference, not a principle. I'm still working out which one.

What worries me isn't the people who notice this and adjust. It's the ones who don't notice because the output is still good. The manager reviewing AI-generated work and calling it done. The analyst who hasn't written a real argument in six months because there was always a better tool for it.

Everyone is becoming a PM.

-- Gabriel