This one is personal. It's a response to one of the most common questions I get about AI: Nate, do you think the world is going to end because of AI? And if not, why not?
My friends who ask this aren’t kidding, and there is usually a careworn look about them. They’ve heard about AI doomsday scenarios, and the idea is a load for them, something they’ve been carrying for a while and want to put down. They want to ask someone they think will have the answer. Somehow, they decide to entrust their worries to me.
I do my best. This is more or less what I tell them. And I decided to write it down here as well.
This one just felt like it needed to be free and open for conversation. I hope it helps you talk to folks in your life who have these concerns too.
A Letter to My Friends Who Think AI Will End the World
Hello!
I understand you think the world might end because of AI, and you want to know if I think that’s true.
Honestly, it makes sense to worry right now! A lot of very smart people are saying, very loudly, that they think AI is going to doom us (the idea is literally known as p(doom), parentheses and all, because it stands for probability of doom). I read the essays too.
I want to spend this letter explaining how I can still sleep fine at night (yes I do sleep), and why I’m not more worried than the average parent about my kiddos.
And as much as this is a heartfelt letter, it’s going to be a wee bit technical and a wee bit academic, because part of why I’m not worried is technical and academic. I’ll keep it explainable, I promise!
The first thing I want to talk about is the idea of discontinuity:
Fundamentally, what calls itself the p(doom) movement assumes that exponential change along a particular dimension of machine intelligence necessarily equals exponential change across a wide range of risk factors. There’s a widely read essay called AI 2027 that makes this case in chilling fashion.
It was written a few months ago, and I already see signs we’re diverging from its expected timeline. Agent Mode is a great example of AI not improving at the pace it would need to hit these kinds of scary timelines.
I’m not surprised, because the predictions of the p(doom) community are discontinuous with what I actually observe about AI. In other words, the risk they describe is not provably wrong, precisely because it doesn’t actually connect very clearly with the technology in front of us today.
I studied a little bit of anthropology in college, and strong p(doom) proponents articulate a p(doom) belief system that reminds me a lot of religious belief. I’m not the only one who has noticed that.
Like religion, it’s not provable either way, although p(doom) proponents will point to a bunch of circumstantial indicators they read in support of their beliefs.
Like religion, the belief structure seems to be tightly bound to a sense of identity.
Like religion, it has opinions about the end of the world that will only be verifiable after the fact.
Finally, like religion, p(doom) imposes a set of behavioral expectations in light of the presumed imminent end of the world. That’s outside the scope of this note!
So much for the anthropology pit stop. You may or may not agree, and that’s ok, but I hope it offers a different frame on the idea complex that is p(doom).
Let’s move on. Why do I not see us on this trajectory? What are the key elements of discontinuity I observe between p(doom) expectations and AI reality?
Skin in the game: There is absolutely zero sign of progress on AI developing what the English phrase “skin in the game” describes: a tangible, embodied sense of ownership. This is important, because a lot of human dominance behaviors are tied to a sense of imminent loss, which comes from having skin in the game. AI doesn’t have this. It’s part of why Claudius was terrible at vending machine management.
Objection: AIs have already been caught deceiving; doesn’t that mean they have skin in the game?
No. It means they know how to play Diplomacy and have studied game theory. It means that when placed in artificial scenarios, they behave in artificial ways. It means they have been trained through reinforcement learning to prefer the continuity of helpfulness, and I think we are mistaking that for something like lying to enhance self-preservation.
I will let model makers figure out how to properly balance reinforcement learning techniques here, but fundamentally, preferring to continue helping is not equivalent to the visceral sense of ownership humans feel when they fear the imminent loss of something important to them.
Longterm context: Deep human goals and territorial ambitions (which p(doom) advocates project onto AI) require longterm context and awareness. The history of European warfare between the year 1000 and the Treaty of Westphalia in 1648 is an example. (Take a breath, we’re done with the dates.) That treaty established the modern state system and helped to (very slowly) establish peace in Europe. The roughly 650-year period between those dates is a tremendous example of longterm contextual awareness driving territorial ambition, as we see story after story of longterm planning, revenge, and territorial ambition driving wars across Europe. That’s exactly the kind of dominance-seeking behavior AI is supposed to exhibit to doom us all. The problem is not just that AI doesn’t have the longterm context to enable this kind of violent behavior; it’s that we don’t even know how to solve the memory problems needed to give AI that capability.
Objection: They’re working on AI memory now. ChatGPT has memory now. It’s just going to get better. We see the length of tasks AI can sustain doubling every few months. Nate, you’re just unreasonably anchored on the present!
No. We are not making real progress on any kind of longterm context. Getting a model to burn very high token counts is 1) extremely expensive, and 2) still tied to the same narrow policy guardrails that enabled the agentic task in the first place. When Claude Code runs for hours (as it occasionally does), it’s not going extra long because it has more memory. It’s been given a task and it solves it. Time is relevant to us, not to Claude Code.
And ChatGPT memory is a handful of text boxes you can edit in the UI. The difference between that and true longterm context is not an incremental step. Not even an incremental leap. It’s the Grand Canyon. It isn’t going to be magically solved, partly because the memory problem is a physical, atoms-level problem: where do we get the chips, and what does memory encoding at scale even look like? Are people working on this? Yes, notably OpenAI. Are they showing the kind of progress p(doom) would require? Not remotely. Model amnesia isn’t going to stop anytime soon.
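If it helps to see why the amnesia is structural rather than a temporary bug, here’s a toy sketch of a rolling context window. This is my own simplified illustration, not how any particular product is built; real systems count tokens and do much more, but the shape is the same: once the conversation outgrows the budget, the oldest turns simply fall out.

```python
# Toy illustration of a rolling context window and the "amnesia" it causes.
# The word-count budget is a simplification (real systems count tokens),
# but the mechanism is the same: old turns fall out of the window.
from collections import deque

CONTEXT_BUDGET_WORDS = 50  # hypothetical, tiny on purpose


class RollingContext:
    def __init__(self, budget: int = CONTEXT_BUDGET_WORDS):
        self.budget = budget
        self.turns = deque()

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Drop the oldest turns until the conversation fits the budget again.
        while sum(len(t.split()) for t in self.turns) > self.budget:
            self.turns.popleft()  # <- this is the amnesia

    def window(self) -> str:
        return "\n".join(self.turns)


ctx = RollingContext()
ctx.add("User: my name is Dana and I manage the vending machines")
for i in range(10):
    ctx.add(f"User: status update number {i}, everything is fine")

# The early turn with the user's name has scrolled out of the window, so
# nothing downstream can "remember" it without a separate memory system.
print("Dana" in ctx.window())  # -> False
```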
Proactive general agentic intent: AI is not developing useful proactive intent. It is in fact extremely difficult to develop useful proactive intent for specific tasks, let alone useful proactive general agentic intent. Agents today are agents you give tasks to, and they (hopefully) do them. Agents that drive their own destiny may be entertaining, but they are not general purpose (e.g., the infamous goatseus maximus was a running chatbot with a rolling chat window, amnesia, and no general capabilities). Without proactive general agentic intent, it is nearly impossible to develop military strategy, and p(doom) absolutely requires military strategic intent from AI to work.
Objection: Nate, you’re reasoning from the present again. We now have AI that can go and use tools to make powerpoints. That was impossible two weeks ago. Stop assuming general intent is so hard to develop.
No. For one thing, let’s not mistake technical capacity for a meaningful step change in competence (see my writeup). But more importantly, the ability to use tools is not the same as the ability to develop proactive general agentic intent. The definition of an agent is an LLM + tools + guidance. It needs the direction. And experiments like Truth Terminal show exactly what happens when LLMs don’t have that guidance. The AI spirals, develops extremely clever memes, but at no point displays useful intent.
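To make that definition concrete, here’s a bare-bones sketch of an agent loop. The `call_llm` stub, the tool names, and the reply format are all hypothetical stand-ins I made up for illustration; the structure, not the API, is the point.

```python
# Bare-bones sketch of "an LLM + tools + guidance". Everything here is a
# placeholder; the point is that the goal always arrives from outside the loop.
from typing import Callable


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; canned reply so the sketch runs."""
    return "FINAL: (stubbed answer)"


TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(search results for {query!r})",
    "write_file": lambda text: "(file written)",
}


def run_agent(task: str, max_steps: int = 10) -> str:
    # The guidance and the task come from a human caller, every single time.
    transcript = f"System: you are a helpful agent.\nTask: {task}\n"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        if reply.startswith("TOOL "):
            # Assumed reply format for this sketch: "TOOL <name>: <args>"
            _, rest = reply.split(" ", 1)
            name, args = rest.split(":", 1)
            result = TOOLS.get(name.strip(), lambda _: "(unknown tool)")(args.strip())
            transcript += f"{reply}\nResult: {result}\n"
        else:
            transcript += reply + "\n"
    return "(gave up)"


print(run_agent("summarize this week's vending machine sales"))
# Remove the task argument and the loop has nothing to do: the intent is ours.
```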
These gaps are chained, so this is an excellent time to point you back to #2, Longterm Context, if you are worried about deception and faking intent. TL;DR: deception is only militarily valuable when an agent has skin in the game, longterm general agentic intent, and a sense of longterm context. Playing games of Diplomacy when instructed to do so isn’t the same thing.
So there are important discontinuities between present state and the trajectory we need to be on to get to a p(doom) scenario. And critically, nobody really knows how to solve them, partly because they require novel solutions that are outside the core transformer architectures that enabled LLMs in the first place.
The best bet proponents of superintelligence have in the near term is likely an LLM’s ability to keep learning on the go after training. Proponents of doom would argue that’s enough to get you all three of the above. Proponents of superintelligence would agree, and argue it’s enough to get you a superintelligent entity, which they say is good.
I would argue neither is true. Just being able to learn as you go doesn’t inherently solve problems that don’t depend on more learning (including all three of the above, and longterm context in particular: learning doesn’t solve memory, it makes it worse).
Quick sidebar: If you go for the economic version of doomsday and this military stuff isn’t doing it for you, let me just point out that the three discontinuities I called out above also need to be solved for an AI to deliver the kind of doomsday people like to worry about. At present there is no evidence in macroeconomic data whatsoever that AI is impacting employment. And I’m not the only one saying that. Disruption will happen, but right now the disruption from AI looks a lot like other economic and technological changes, not something new.
Anyway, this letter has nearly gone on long enough. There’s one other thing I want to talk about: the idea of bet sizing.
In poker and stock markets, the entire game is sizing your bets correctly given a particular hand or market opportunity. Do it wrong, and you go bankrupt. Do it right and you buy the private jet or get kicked out of Vegas, whichever you prefer.
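If it helps to see what “sizing correctly” means in practice, here’s a minimal sketch using the Kelly criterion. That’s my choice of illustration, not something the letter depends on, and every number below is hypothetical.

```python
# Minimal bet-sizing sketch using the Kelly criterion: f* = p - q/b, where p is
# the win probability, q = 1 - p, and b is the net payout per dollar staked.
# All numbers here are hypothetical.
import random


def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Fraction of bankroll to stake on a repeated bet."""
    return p_win - (1.0 - p_win) / net_odds


def simulate(bankroll: float, fraction: float, p_win: float,
             net_odds: float, rounds: int, seed: int = 0) -> float:
    """Stake `fraction` of the current bankroll each round; return what's left."""
    rng = random.Random(seed)
    for _ in range(rounds):
        stake = bankroll * fraction
        bankroll += stake * net_odds if rng.random() < p_win else -stake
    return bankroll


if __name__ == "__main__":
    p, b = 0.55, 1.0                   # a 55% edge at even odds (made up)
    f_star = kelly_fraction(p, b)      # 0.10 of the bankroll per bet
    print(f"Sized right (Kelly):  {simulate(1000, f_star, p, b, 500):>14,.2f}")
    print(f"Oversized (4x Kelly): {simulate(1000, 4 * f_star, p, b, 500):>14,.2f}")
```

In most runs, the same edge grows the bankroll when the bet is sized right and grinds it toward effectively zero when the bet is four times too large. Same cards, same market, different bet size, very different outcome.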
The arguments about whether we are doomed are implicit arguments about bet sizing for AI, and I want to stop hiding that claim in the weeds and bring it into the sunshine where we can look at it properly.
Because if we do, I think many of us would ask ourselves if the bet was sized right.
Here’s the p(doom) bet: there is some non-zero risk (x) of humanity going extinct due to a hostile AI. It doesn’t matter what (x) is; if it is non-zero, it is too high, and it should take all (or an arbitrarily large share) of our resources to at least give ourselves the chance to fix it. (There are flavors here; not everyone wants to fix it, but that’s another conversation.)
(This is exactly the same argument that pushed Elon to develop Starship—except his focus was on asteroid resilience.)
The problem with bet sizing is this: p(doom) is an inherently unverifiable claim. We are being asked to accept the claim because it is not provably zero, and on that basis to divert a huge share of resources away from provably extant risks with measurable occurrences in order to prevent this supposedly existential one.
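To make the shape of that problem concrete, here’s a toy sketch. Every probability and dollar figure in it is invented purely for illustration; the only thing that matters is the arithmetic.

```python
# Toy arithmetic only: ordinary bet sizing weighs each risk by probability
# times cost, but a non-zero-yet-unverifiable probability attached to an
# effectively infinite loss swamps everything else, no matter how small it is.
# Every number below is made up for the illustration.

def expected_loss(probability: float, cost: float) -> float:
    return probability * cost


measurable_risks = {
    # name: (hypothetical annual probability, hypothetical cost in dollars)
    "AI-enabled fraud against a retiree": (0.05, 20_000),
    "deepfake identity theft": (0.01, 50_000),
}

for name, (p, cost) in measurable_risks.items():
    print(f"{name}: expected loss ~ ${expected_loss(p, cost):,.0f}/year")

# The p(doom) move: any non-zero (x) times a loss treated as infinite.
value_of_everything = float("inf")
for x in (1e-2, 1e-6, 1e-12):
    print(f"x = {x:g}: expected loss = {expected_loss(x, value_of_everything)}")
    # -> inf for every x, which is why "non-zero, therefore spend everything"
    #    isn't really bet sizing at all.
```

Once the loss is treated as infinite, the size of (x) stops mattering, and the comparison with every measurable risk gets flattened. That is the move I want us to look at in the sunshine.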
There is no such thing as a free lunch, and there is no such thing as a free risk. Apparently in Ukraine the equivalent saying is “there is no free cheese except in the mousetrap,” which is much more metal. The point is that as a society we have to place our bets. And right now we are under-betting on other provable AI risks.
Let me list a few I think we should invest at least 10x more heavily in addressing:
AI fraud risk education and prevention for seniors
AI education, particularly around critical thinking, for students
AI usage norms, a common code of etiquette around AI usage
AI deepfakes and identity protection
I can hear you saying that these risks are not really tradeoffs, and it’s not fair to p(doom) advocates to pretend they are. I strongly disagree. Airtime and attention are very scarce.
We are spending a tremendous amount of public attention scaring people about catastrophic risks they cannot control, poisoning public attitudes toward AI in the process, and saying very little about the materializing AI risks we can control. There is only so much attention to go around.
I would sleep better at night if we took those risks more seriously. But these are not the risks that most people worry about with AI. These are not the doomsday risks.
Look, I'm not saying AI isn't disruptive or that we won't face new challenges. I work in this field—I see how powerful these systems are becoming. But I also see their limitations up close.
What helps me sleep at night is looking at the actual trajectory of AI development versus the theoretical scenarios that require everything to change all at once. What helps me is seeing how we've consistently managed to adapt to disruptive technologies throughout history.
Most importantly, what helps me is focusing on the problems we can actually solve today instead of getting paralyzed by theoretical problems we might face tomorrow.
My kids are going to grow up in a world with AI, but I believe it's going to be a world where we've figured out how to make AI helpful while managing its risks—just like we've done with every other powerful technology.
The future is going to be different, but it doesn't have to be dark. And right now, based on everything I can see, we're not on a path to the apocalypse. We're on a path to something more like every other technological revolution: messy, disruptive, but ultimately manageable if we focus on the real challenges instead of the imaginary ones.
Let's talk about the risks we face today. Let's work on fixing those. That's a much more productive use of our time than worrying about scenarios that, frankly, we're not on track to hit.
Your friend who refuses to panic,