The Irreducible Human: Six Things We Still Do That Machines Can't
What is it that we're good at? Why? How does that relate to what LLMs are good at, and getting better at?
If you’ve been following my Substack for any length of time, you’ll be familiar with the concept of Polanyi’s Paradox—the idea that we know more about our work than we can tell. And if we can’t articulate or describe what we actually do for work, then we won’t be able to tokenize it easily, and that puts it outside the ballpark of what LLMs can automate away.
That is not a super popular position, I’ll admit. I think even Bill Gates disagrees with me at this point, lol. But I’ll stick to it—I see remarkable gains in intelligence from models, but I do not see the kinds of breakthroughs needed to really enable LLM intelligence gains to translate into these six vital corners of the human experience.
Yes, I’m naming them! In past posts I’ve just talked about Polanyi’s Paradox as a concept that protects human knowledge jobs, but I do like specifics. I think they make thinking more useful. And it is possible to get specific here.
Why post this now? Well, first, now is as good a time as any. I’m sure there will be more LLM announcements this week. Last week was AI super week, and everyone from Google to Anthropic to OpenAI to Nvidia made headlines for new AI tech (and don’t forget Microsoft). It was overwhelming. Even to me.
So why not step back and think about the larger picture? We’ll get back to our regular programming tomorrow with a cool post on a daily and weekly dashboard you can build with Opus 4—full prompts and everything! But for this Tuesday, let’s take a second and reflect on the interesting differences between this alien intelligence and ourselves…here are six things we’re very, very good at, things with tremendous economic value, that LLMs are just not getting better at.