Ilya Sutskever just spent 96 minutes explaining why the scaling era is over and we need a fundamentally new approach to AI.
Three days earlier, Google shipped Gemini 3 and declared it their biggest performance jump ever.
One of them is profoundly wrong. And if you’re building on these systems, the answer matters.
I watched the full Dwarkesh interview so you don’t have to. Here’s what Ilya actually said, and what it means for anyone deploying AI today.
Here’s what I cover:
The Adaptive Prompt — tests whether a model can actually update its thinking when requirements change, or whether it just starts over from scratch every time
The Premortem Prompt — forces the model to imagine failure before committing to a plan, compensating for the missing gut feeling that tells humans “this seems dangerous”
The Strategy Fan Prompt — breaks out of the narrow set of tactics models learn during training and surfaces genuinely different approaches, not just variations on the same idea
The Harsh Reviewer Prompt — makes the model attack its own output before you ship it, catching the gap between “looks right” and “actually works”
The Spec Decomposer Prompt — forces the model to think about what actually needs to be solved before jumping to a template, producing solutions that hold up when your context changes
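To make the shape of these patterns concrete, here is a minimal sketch of the Harsh Reviewer loop: generate a draft, have the model attack it, revise, and repeat until it passes. The prompt wording and the `ask_model` function are my own illustration — `ask_model` is a stub standing in for whatever LLM API you use, and these are not the actual templates from the post.

```python
# Illustrative sketch of the "Harsh Reviewer" pattern: make the model
# attack its own output before you ship it. NOTE: `ask_model` is a
# placeholder for a real LLM call, and REVIEW_PROMPT is a hypothetical
# template, not the one from this post.

REVIEW_PROMPT = """You are a harsh reviewer. Attack the draft below.
List every way it could be wrong, incomplete, or misleading.
If you find no serious problem, reply exactly: APPROVED.

Draft:
{draft}
"""

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM API call. This toy version approves
    any draft that names its assumptions, so the loop below runs
    without network access."""
    draft = prompt.split("Draft:", 1)[-1]
    if "assumes" in draft:
        return "APPROVED"
    return "Flaw: states a conclusion without naming its assumptions."

def harsh_review(draft: str, revise, max_rounds: int = 3) -> str:
    """Critique `draft`; apply `revise(draft, critique)` each round
    until the reviewer approves or `max_rounds` is exhausted."""
    for _ in range(max_rounds):
        critique = ask_model(REVIEW_PROMPT.format(draft=draft))
        if critique.strip() == "APPROVED":
            return draft
        draft = revise(draft, critique)
    return draft

# Usage: a trivial reviser that patches in the missing assumption.
final = harsh_review(
    "Ship it: the cache fix is safe.",
    revise=lambda d, c: d + " (This assumes cache keys are immutable.)",
)
print(final)
```

The point of the loop structure is the `max_rounds` cap: self-critique can cycle forever on cosmetic objections, so you bound the iterations and take the best draft you have.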
I’ll tell you where I think Ilya is right, where I think he’s wrong, and what that means for how you build, evaluate, and deploy AI systems right now.