In a little over two years, we’ve gone from being surprised that AI can write entire paragraphs to confidently expecting that AI will disrupt our economy, change work as we know it, and perhaps even change the evolutionary curve of our species.
That is a pretty darn big jump in expectations. In fact, I’ve never seen a bigger one. Here are just a few recent examples of the hype around AI:
Sam Altman (OpenAI, Snowflake Summit – June 2025):
“I would bet next year that, at least in some limited cases, we’ll start to see agents that can help us discover new knowledge and solve business problems.”
Jensen Huang (NVIDIA, CES 2025 keynote):
“The ChatGPT moment for general robotics is just around the corner.”
Demis Hassabis (Google DeepMind, Big Technology interview – Jan 2025):
“I think we’re probably three to five years away” from AGI.
Dario Amodei (Anthropic, Davos panel – Jan 2025):
“By 2026 or 2027, we will have AI systems that are broadly better than almost all humans at almost all things.”
Elon Musk (xAI, World Government Summit – Feb 2025):
“Grok 3 has very powerful reasoning capabilities… in our tests so far, Grok 3 is outperforming anything that’s been released.”
Mustafa Suleyman (Microsoft AI, Decoder podcast – May 2025):
“Most human knowledge work in the next five to ten years could likely be performed by one of the AI systems we develop.”
You see what I mean. Big hype here.
The problem is it’s not at all clear that what we have with AI today is by itself enough to get to AGI. Let me give you a few examples to make this tangible:
Messy Context: You want to get some insights from a gigantic spreadsheet. It’s marketing metrics: about 5 tabs, a bunch of macros and complicated formulas, all built by the person who had your job before you. You need to figure out how to look across all 5 tabs to troubleshoot a metric. Then, once it’s right, you have to build a report. Then you have to provide analysis and recommendations off that report for next month’s spend. AI is supposed to help, but even o3 chokes on a 5-tab sheet. Maybe there are specialized tools somewhere, but you don’t have time to go find them before the marketing meeting. You throw up your hands and just start working on it directly.
Long Horizon Intent: You get called into your Director’s office. “Hey, remember that webinar from April 2023? The one we did with record attendance? Can you have a look at that and figure out what went right there? I know you were here. I want you to come back with a webinar plan that actually reflects what we did well, because I was taking a look at the numbers, and these last two years have just been a steady decline.” You wonder why he didn’t beef about it sooner, swear under your breath, and try to use AI to help you search Notion, Jira, Slack, anything, for context. You find fragmentary records here and there. Nothing that adds up to a narrative. So you go talk to an old-timer, someone who has been here since 2019. She immediately recalls exactly what happened, and you complete your report successfully based on her input.
Adaptable Learning and Memory: You have an AI agent that’s supposed to learn your inbox patterns and respond intelligently in your voice. You find you’re constantly tweaking the system prompt that’s supposed to get it to “write in your voice,” and it never feels quite right. No matter how many of your emails the agent reads, you don’t see it getting from 80% to 98% on your voice. Eventually you give up, because 80% good on an email you have to rewrite isn’t really saving you time.
These are all real problems. They aren’t easy to solve. And none of them get solved in immediately obvious ways by Reinforcement Learning (RL) and inference alone. So what’s left? Read on (and watch on) for a breakdown of who’s sounding notes of caution about the inevitability of AGI, and a deeper discussion of the hard problems we still need to solve in order to truly unlock AGI. Spoiler alert: it’s not inevitable, and we will see it coming.