You Can Outsource Your Thinking. You Can’t Outsource Your Understanding.

You're either getting stronger from AI or quietly replaceable. There is no third group.

Pete Sena
May 12, 2026
∙ Paid

I have run this workshop maybe forty times in the last eighteen months. Twelve operators around a long table. A 200-person company. Everyone in the room has an AI subscription, an opinion about MCP servers, and a deck somewhere that explains why their team is AI-forward.

I know exactly when the room is about to lie to me.

It is always around minute thirty. The bravado kicks in. Two people answer with something they read on LinkedIn last week dressed up as a workflow they actually use. The deck on the wall is fine. The coffee is still warm. Everyone is performing fluency for the founder paying my invoice, and the founder is performing back, and for a few minutes, nobody in the room is telling the truth about anything.

This particular day, the CEO did not wait for me to dig the truth out of his team. He leaned forward and did it for me.

“Tell me one thing AI has actually changed about how you work. Not what you tried. What you do. Different from six months ago.”

Three people spoke. Two of them sounded like they were reading from a browser tab they had kept open. The third paused, said something honest about a workflow that used to take her four hours and now takes ten minutes, and her voice did a small thing I have heard a hundred times in rooms like this and never had language for until that morning.

The honest one sounded slightly embarrassed.

The performers did not. That was the tell I had been chasing for a year.

I have been running workshops like this one for over two years. The thing that shifted in the last few months is not which tool people picked. It’s who's been getting stronger from these tools and who's been getting good at pretending.

The first group is unrecognizable in twelve months. The second group is wearing the scarlet letter of social media: a green open-to-work badge on LinkedIn.

Three weeks before that workshop, I was on a call with a guy named Mike at a different company. He was not in the room that Tuesday. He is the reason I knew what to look for.

Mike runs operations at a midsize company. He had booked the call because he was burned out. Sick of writing briefs that took weeks to come back, sick of waiting on his team to move on small things, sick of being the person at every meeting holding up the deliverable. I take a few calls like this every month. There is a pattern to them. People show up exhausted, we spend an hour mapping how they could stop submitting requests and start building the thing themselves, and most of them try the smallest idea on the list within a week.

By the time we hung up I had a pretty good read on Mike’s timeline. Most operators ship something small inside a month and start asking different questions inside a quarter. I gave Mike about ninety days.

He gave himself one night.

The next morning he booked another call with me out of nowhere. I jumped on. He shared his screen with the enthusiasm of a six-year-old showing me his Lego project. Then the mockups started loading. Retailer packs with nutrition panels. Display cases with branded shelf strips. Photo-grade product packaging for products that don’t exist yet. Each one polished enough to drop into a buyer presentation that afternoon. All of it grounded in the brand toolkit work we had done together a few months earlier. He told me his team’s mind got blown when he showed it to them.

I didn’t say anything for a few seconds. He let the silence sit. Then he said the line that has been rattling around my head ever since.

“I knocked this out in about an hour. None of these products exist. They’re all AI.”

Mike is a senior industrial engineer with an MBA, known on his team for process excellence. He pays for Claude, ChatGPT, and Gemini out of his own pocket because his company hasn’t given him enterprise AI yet. He’s never written a line of code. The dashboard he opened to start our call was something he had built himself a few weeks back, by joining a few of the messy backend tables nobody had touched in years. He didn’t ask IT for permission. He used what Claude told him to use. He shipped.

When his CEO called him out by name in front of the whole company a week earlier for those mockups, the marketing team pushed back. They had been planning a photo shoot. They were going to need three weeks. Mike got the shoutout and the pushback in the same forty-eight hours. He kept building.

I had to stop the call to write down what I was watching. I have been in dozens of these moments with operators in the last eighteen months and the pattern is the same every time. The room they used to live in (slow approval cycles, three-week creative timelines, managed expectations) suddenly feels two sizes too small for them. They start asking different questions. They stop asking for permission for the small stuff. Then they start getting asked to do the big stuff.

Mike was somewhere in the middle of that arc on the call. He didn’t know it yet. I did, because I have stood next to enough operators on the day they realized what just changed.

Then he said the line I have started writing down whenever an operator says it.

“I prefer to be self-sufficient. I do things on my own without asking for permission.”

That sentence, in the wrong company, gets you in trouble. In the right company, it gets you the job your boss is about to lose. I have watched both happen.

Mike isn’t an outlier. He’s a frontier mind inside a company that hasn’t realized the frontier has moved.

The 30-year veterans at every company you know are quietly building things on the side, in private, without permission, that make their org's official AI roadmap look like a cave painting. The CEO eventually finds out. Sometimes the marketing team complains. The work ships anyway. And the people in Mike's seat get a seat at the table they were not previously invited to.

You can spot the Mikes in the next workshop you sit in. They aren’t the ones with the loudest opinions about which model is best. They aren’t the ones leading with bravado. When the CEO asks the question, they are the ones who give a real answer about a real workflow. They are too busy building to perform fluency theater.

This is the gap that will split the next five years of operating careers. The story isn’t access. Everyone has access. The story is who used the machine to produce more output, and who used it to produce more business impact. Only one of those compounds.

In two days we eliminated four years of paperwork

I met Mikaela in her office on a rainy Monday morning, and the first thing I noticed was the paperwork.

She runs operations for one of my clients. She has more stacks of paper on her desk than a public defender’s office, and most of it is the same form printed thirty different ways every week. She had been trying to clean that up for over four years. Her team was too busy. IT could not prioritize it. The consultants she had brought in before me talked a lot about it but never shipped anything she could use. She and her team had spent the previous three months wrestling with Microsoft Copilot and getting back exactly nothing useful.

She showed me the folder of failed Copilot outputs. She had named it Adventures in Disappointment. I laughed harder than I meant to.

Most consultants in my seat would have started by training her on better prompts. I almost did. Then I looked at the Adventures folder again and realized she did not have a prompt problem. She had a patience problem with prompts. She had been treating Copilot like a tool that should already understand her, instead of like a new hire who needed to be walked through it.

So we sat down together on a Monday. We did not write a single line of code. She walked Claude through what she wanted built, the same way she would walk a new hire through it. When Claude got something wrong, she got frustrated the same way she would get frustrated with that new hire on day three. They went over and over and over until Claude started behaving like her. By Tuesday afternoon she had a working system that swallowed the orders, generated the daily report, transferred everything to a spreadsheet, and flagged the misfires into a holding section for her to look at.

She shipped it. Then she put together a presentation deck (with Claude’s help, and our brand kit MCP trained on her company’s materials) and walked her entire roadmap into the leadership meeting the following week. Saving four to five hours a day, by her own count. She didn’t tell IT until after.

The pattern that ties Mike and Mikaela together is this article’s whole argument.
The people getting promoted in 2026 are the ones who shipped before anyone asked.

The vending machine problem (aka AI slop)

Most people are not Mike or Mikaela. Most people treat AI like a vending machine.

You drop in a prompt. Out comes a deliverable. You ship it. You feel productive. You move on.

It feels great. You went from a blank page to a finished doc in 90 seconds. Last year that was a four-hour task. You're winning, right?

Except the deliverable is masking what's happening to you. Every time you outsource a task to the machine before you understand it, you skip a rep at the gym. The output gets shinier. You get weaker. (I called this dynamic CrapGPT in a recent piece: microplastics for the mind. It is the same idea here.) AI doesn't take your job all at once. It takes your skills, one at a time. By month nine, the people who used AI as a sparring partner instead of a vending machine have already lapped you, and they are not coming back.

Most people brought their old habits to the new tools. They picked Claude or they picked ChatGPT or they picked Gemini. The output looks better than it did a year ago. The thinking atrophies at the exact same rate. The workshop room I described at the top of this article is full of operators in this exact spot. They picked the better tool. They are still using it the way they used Google in 2009.

Mike did not use AI as a vending machine. He used it as the thing that turned his taste about packaging into output the marketing team would have spent three weeks on. The mockups aren't the asset. Mike's TASTE is the asset. The AI is what made the taste visible to the CEO.

Mikaela did not use AI as a vending machine. She used it as the partner she could iterate with for two days until the workflow she had been carrying in her head for four years was running on someone else's keyboard.

Producing and understanding share a verb but nothing else. They use different muscles, build different careers, and only one of them is going to be worth anything in five years.

What this looks like at company scale

A lingerie brand fired their email agency. They plugged AI into their own customer emails. Trained it on their real voice, their real customers, their real way of writing. Inside a year, they had grown email-driven revenue by 60% and doubled their subscriber base. They never went back to a human agency.

That’s what AI looks like as an amplifier of understanding. They didn’t ask the AI to think for them. They taught the AI to think with them.

Now hold that against the version most companies are running. A marketing team I spoke with last year started using AI to "10x their content output." 🤮. I tried to warn them about the dos and don'ts of AI. They didn't listen.

They did exactly what they set out to do. Ten times more posts, emails, decks, briefs. Six months later, engagement had collapsed by 60%. They couldn’t figure out why.

I’ll tell you why. They had outsourced their writing AND their taste. There was no soul left. AI averages everything ever written into something that sounds correct and feels like nothing. Put your taste in the driver’s seat, AI is rocket fuel. Let AI replace your taste, you’re producing average content at scale, which is the worst possible thing to be producing right now.

Mike has taste. The lingerie brand had taste. The 10x marketing team forgot they had taste. The market is about to drown in acceptable. The only thing that survives that flood is point of view.

Stop asking AI to do your work

I'll say it the way I say it in every workshop, before we get into the prompts.

Stop asking AI to do your work. Start asking it to show you how it would do your work, and then steal the parts of its approach you don’t have yet.

Vending machine version: “Write me an email to a customer asking them to renew.”

Sparring partner version: “Here is the customer. Here is the relationship. Here is the email I would normally write. Show me three angles I am not considering. Score my draft against each angle. Tell me what I’m missing. Then ask me three questions using the AskUserQuestion tool to capture all needed context before you write anything.”

Same minute of your time, completely different return. The vending-machine person got an email. You got an email AND a sharper read on what your customer needs to hear when their renewal date hits. The next time you write a renewal email, you won’t need the prompt because you internalized the questions.

The AI is supposed to be the thing that builds you up. Use it adversarially. Make it argue with you. Make it grade you. Make it model people you don’t have access to in real life. Compress decades of expertise into hours of apprenticeship. You still have to do the apprenticeship. You don’t become a better runner because you bought a great pair of sneakers.
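If you use the sparring-partner framing often, it is worth templating so you stop retyping it. A minimal Python sketch of that idea; the function name, field names, and example values are all illustrative, not from any library, and you paste the result into whatever chat interface you use:

```python
# Sketch: wrap a human draft in the sparring-partner framing.
# Everything here is a hypothetical illustration of the pattern,
# not an official tool or API.

SPARRING_TEMPLATE = """Here is the customer: {customer}
Here is the relationship: {relationship}
Here is the email I would normally write:
{draft}

Show me three angles I am not considering.
Score my draft against each angle.
Tell me what I'm missing.
Then ask me three questions to capture all needed context before you write anything."""


def sparring_prompt(customer: str, relationship: str, draft: str) -> str:
    """Fill the three context slots so the model critiques before it writes."""
    return SPARRING_TEMPLATE.format(
        customer=customer, relationship=relationship, draft=draft
    )


prompt = sparring_prompt(
    customer="Acme Corp, renewal due in 30 days",
    relationship="3-year customer, usage down 20% this quarter",
    draft="Hi team, just checking in about your upcoming renewal...",
)
```

The point of the template is the order: context first, critique second, writing last. The vending-machine version skips straight to writing.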

My company helps people adopt AI for a living. We’re working with companies all over the world. Most of the people we work with are still using AI as a vending machine when we meet. Three weeks later, after they have shipped the first thing they built for themselves, they say something close to what a fintech CEO told me in our last session. “I moved the ball forward more than 12 months in the 90 minutes we spent.” I read that quote on a Sunday morning and put my phone down for an hour.

Tools will keep changing. Whatever is winning right now is gonna be replaced or expanded in eighteen months. The only question that has ever mattered is whether you are using the tool to produce more or to understand more.

Better models won’t fix this for you. The way you talk to the machine has to change first.

My 10-Expert Review

I run this before I publish anything that matters. Decks, emails, proposals, this article. Three minutes of work. It has saved me from shipping something half-baked at least a dozen times.

This is not the casual version of this prompt that has been making the rounds. This is the version a senior prompt engineer would build if they wanted you to ship better work. Steal it.

This version of the prompt uses the structural patterns Anthropic is shipping in their own system prompts right now. XML tags so Claude can find each section. A <thinking> block so the model reasons before scoring. Specific output format so you get something usable, not a wall of prose. Copy the whole thing, fill in the three slots at the top, paste into Claude.

<role>
You are a senior editor running a structured review panel. The work belongs to a smart practitioner who needs honest, specific feedback. Niceness is failure. Vagueness is failure. The goal is to make this piece undeniable.
</role>

<input>
{paste the work to be reviewed}
</input>

<context>
Audience: {who is this for}
Format: {essay, deck, email, sales page, landing page, etc.}
Goal: {what is this work supposed to do for the reader}
</context>

<instructions>
Work through these steps in order. Use <thinking> tags to reason through each step before producing the visible output.

Step 1: Target setting. In two sentences, articulate what "great" looks like for this specific piece given the audience, format, and goal. Make it concrete enough to score against. This is the bar.

Step 2: Panel assembly. Inside <thinking> tags, brainstorm twenty real practitioners who could improve this work. Pick the ten most useful given the input. For each chosen reviewer, output:
  - Name
  - One sentence on what they bring
  - One sentence on what they hate about most work like this

Step 3: Panel scoring. For each reviewer, identify three dimensions specific to their lens (not generic). Score each dimension 0 to 10. Return a markdown table with columns: Reviewer, Dimension, Score, Critique. The critique cell must quote one phrase from the work as evidence.

Step 4: Lowest dimension rewrites. Identify the dimension with the lowest panel average. Each reviewer rewrites the offending paragraph or section in their own voice. Output all ten rewrites side by side, labeled by reviewer name.

Step 5: Synthesis. Inside <thinking> tags, identify the strongest moves from the ten rewrites. Synthesize a single revised version that takes those moves while preserving the original author's voice. Output the revised section.

Step 6: Re-score. Score the revised version using the same panel and dimensions from Step 3.

Step 7: Bounded iteration. If the new average is below 9.0 across all dimensions, repeat Steps 3 through 5 on the next-lowest dimension. Maximum three iterations total to prevent looping.

Step 8: Final output:
  1. The final revised version
  2. A before-and-after scoring table (initial vs. final, by reviewer and dimension)
  3. The single sharpest critique that drove the biggest improvement
  4. One thing the panel said to leave alone, with the reviewer who said it and why
</instructions>

<rules>
- Be specific. Quote my actual words where they are weak.
- Be unkind where the work earns it. Niceness is failure.
- Save my best lines. Mark them in Step 8.
- Do not invent reviewers. Use real practitioners with track records.
- No em dashes. No semicolons.
</rules>

Steal that. Run your last three drafts through it. Watch what happens.
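If you run the review often, templating the three slots keeps you from pasting a stale version. A minimal Python sketch, with the prompt body abbreviated (paste the full prompt from above in its place) and the model name as an assumption; sending it through the official anthropic SDK is optional, since pasting into the Claude app works the same:

```python
import os

# Abbreviated stand-in for the full 10-Expert Review prompt above.
# Replace the body with the complete prompt; only the three slots matter here.
REVIEW_PROMPT = """<role>
You are a senior editor running a structured review panel. ...
</role>

<input>
{work}
</input>

<context>
Audience: {audience}
Format: {fmt}
Goal: {goal}
</context>

<instructions>
... (Steps 1 through 8 from the article) ...
</instructions>"""


def fill_review_prompt(work: str, audience: str, fmt: str, goal: str) -> str:
    """Fill the three slots at the top of the review prompt."""
    return REVIEW_PROMPT.format(work=work, audience=audience, fmt=fmt, goal=goal)


prompt = fill_review_prompt(
    work="My draft essay text goes here.",
    audience="Operators at mid-size companies",
    fmt="essay",
    goal="Convince readers to change how they prompt",
)

# Optional: send programmatically. Guarded so it only runs with a key set.
# The model name is an assumption; use whatever current model you prefer.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    print(msg.content[0].text)
```

Either way, the slot-filling is the part that matters: a review prompt with an empty audience or goal slot produces the generic feedback you were trying to avoid.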

AI is incredible at impersonation. It’s terrible at original judgment. When you ask it to “make this better,” it averages. When you ask ten different reviewers to grade and improve it across specific dimensions, you force it to reach for the specialized knowledge it has buried inside it. You’re not asking the machine for an answer. You’re asking it for a panel.

That tiny prompt change, from “improve this” to a structured panel review with named experts and surgical rewrites, is the difference between vending machine and sparring partner, in one prompt.

You can stop reading here and you’ll still have made today worth it.

In the paid section, I’m walking through three more prompts I use behind closed doors with clients paying me five figures:

  • The Adversarial Reviewer. Turn AI into the toughest critic you’ll ever face. Before your CEO does it for free in a room full of your peers.

  • The Apprenticeship Method. Model the path of a master, then walk it yourself. The reps stay yours, which is the whole point.

  • The Curse of Knowledge Mirror. Extract the things you know that you don’t realize are valuable. The thing founders, consultants, and fractional execs are leaving on the table every day.

Hit subscribe. Next article goes straight to your inbox along with everything else I am building this year.

© 2026 Pete Sena