Prompting just split into 4 different skills. You're probably practicing 1 of them (+ 7 prompts and a pre-flight to close the gap)

Most people think they’re good at prompting. They’re not — they’re good at chatting with AI, which is a different skill that’s rapidly becoming table stakes.

Here’s what I mean. Two people sit down with the same model on the same Tuesday morning. Same subscription, same context window. One types a request, gets back something 70% right, spends forty minutes cleaning it up — good use of AI, maybe 30% faster than doing it by hand. The other spends eleven minutes writing a structured specification, hands it to an autonomous agent, makes coffee, and comes back to finished deliverables that hit every quality bar she defined up front. Does this five times before lunch. Her output this week would have taken three weeks in 2024.

Same model. Same Tuesday. The difference isn’t talent or technical ability. It’s that she’s practicing a discipline that most people don’t know exists yet — and the gap between the people who’ve found it and the people who haven’t is already 10x and compounding. Three model releases in February alone (Opus 4.6, GPT-5.3-Codex, Gemini 3.1 Pro) shipped with autonomous agent capabilities that make chat-based prompting feel like bringing a phone book to a search engine fight. The models stopped being conversation partners and started being workers. Workers that run for hours, then days, without checking in. And the skill of directing a worker you can’t supervise in real time is a fundamentally different discipline from the skill of having a productive conversation.

The word “prompting” is hiding four distinct disciplines. This piece names them, shows where each one breaks, and gives you the tools to close the gaps you didn’t know you had.

Here’s what’s inside:

  • The 35-minute wall. Why every assumption in the 2025 prompting playbook collapses once agents start running autonomously — and the Anthropic data that shows exactly where.

  • What Tobi Lütke figured out first. The Shopify CEO’s insight about context engineering that made him a better leader, not just a better AI user.

  • The Klarna trap. What happens when your context is excellent but your intent is missing — and why $40 million in projected savings turned into a customer satisfaction crisis.

  • Five new primitives and a four-month roadmap. The specific skills replacing the old prompt engineering toolkit, in the order that produces the fastest results.

  • 7 prompts and a pre-flight check. A pen-and-paper thinking exercise before you touch AI; a 10-minute quick start that scores where you stand and builds your first context document; and the full build-out — specification engineer, intent framework builder, eval harness, constraint architecture designer, and the problem statement rewriter that trains the Lütke primitive. Start with the pre-flight.

The framework starts with a shift that happened faster than most people registered — and understanding exactly when and why it happened is the key to everything that follows.
