Why a post about China and the US? Why not stick to prompt packages, Nate?
Because it’s a risky world, AI belongs to all of us together, and I do believe conversations like this one can shift our norms and expectations about AI in ways that are meaningful.
I don’t know if policymakers read this, and I don’t care. I care if YOU read it. I care if you share it (yep, it’s free). I care if you’re able to have the language to frame up why it’s kind of useless to pretend that AI is going to trigger another Cold War.
It can’t and it won’t because (simply) the technology doesn’t support Cold War game theory. I’ll explain why below, and I won’t use words like game theory lol
It’s a short enough read that you can dig in while the grill is heating up, and you’ll have something to chat about with Uncle Fred when he comes over on his 3rd beer (we all have a version of Uncle Fred somewhere in the family and friends circle).
Dig in for an honest, nuanced examination of AI’s global implications and some grilled up practical proposals for balancing healthy competition with necessary cooperation. Not a ton of idealism here, just looking for practical solutions in a complex and risky world, and I think we can all play a part in finding those solutions, together.
Yep, this post is free for everybody, so enjoy and share around.
As a reminder, my weekly executive briefing comes out this Sunday exclusively for Executive Circle members—here’s how to upgrade your subscription plan if you aren’t in the crew yet.
And btw it is a fun crew! It’s not just a Sunday reads kind of deal either. Starting this July, we’re going to do monthly office hours exclusively for Executive Circle members. You’ll get dedicated Q&A sessions where you can engage directly with me on the critical AI issues shaping our future. I’ll be getting the first one scheduled by mid-July, around the time ChatGPT-5 is rumored to come out, so it should be a good conversation!
AI's Growing Pains: A July 4th Letter to Both Superpowers
It began to dawn on me when I was reading about DeepSeek's breakthrough (how Chinese researchers achieved GPT-4-level performance with roughly 90% less computing power), and I couldn't help but chuckle. Not because it was funny (ok, it was a little funny), but because it was so predictable.
Compute gets cheaper over time. Inference is a spreadable technology. The United States had tried to slow Chinese AI gains with export controls, and the result was a remarkable piece of CUDA engineering work that set a new bar in AI efficiency. Export controls for AI are like trying to dam a river and accidentally creating more powerful rapids downstream.
Happy 4th, here we are talking about CUDA engineering lol
It feels like an appropriate time to talk about AI governance though. To risk the cheese factor just a little bit—just like 1776, we're at a moment where old frameworks are colliding with new realities. And some days it sort of feels like the way both Washington and Beijing are approaching AI is about the same way I approached sharing Legos with my brother as a kid (I was not good at this).
Digging up The Cold War Playbook
When you're faced with a transformative technology, you reach for what worked before. The Cold War gave an aura of certainty and structure to a complex global conflict. It made the awesome danger of the atom manageable (which might have been illusory but hey it seemed helpful at the time). Export controls, technology denial, massive retaliation—it all made sense to policymakers when the threat was uranium and ICBMs.
Now it feels increasingly like we’re back in the icebox of another Cold War, with both superpowers doing what feels natural: treating AI like it's plutonium.
But the tech is just different. Starting with the userbase: ChatGPT hit 100 million users in 60 days and is closing in on a billion now. Nuclear weapons took decades to proliferate between countries and required state structures to operate(-ish). AI spreads at internet speed because it is the internet. It's built on the same infrastructure that reduced the cost of global cooperation to basically zero. While diplomats debate export controls, teenagers are downloading open-source models that can never be classified because the tech moves too fast.
There are 450,000+ AI models freely available on HuggingFace right now. We know it, but we don’t think of it in this context nearly enough. Nearly half a million AI models, just sitting there, available to be downloaded by anyone with an internet connection. American researchers and Chinese researchers publish papers every single day, and in many cases they write those papers together! I take it back. The knowledge flows faster than water. It flows like memes.
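If you’ve never actually done it, it’s worth seeing how little it takes. Here’s a minimal sketch using the huggingface_hub Python library (the repo name is just one example of a small public open-weight model; swap in nearly any of those 450,000+):

```python
# A minimal sketch: pulling an open-weight model off the Hugging Face Hub.
# Assumes `pip install huggingface_hub`. The repo_id is just an example;
# hundreds of thousands of public repos work exactly the same way.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Qwen/Qwen2.5-0.5B-Instruct")
print(f"Model weights are now sitting on your machine at: {local_dir}")
```

That’s the whole export-control problem in a handful of lines. No clearance, no license, no state structure required.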
We Shouldn’t Have Been Surprised by DeepSeek
DeepSeek doing more with less is exactly what we should have expected. It's what humans do. To risk going back to that Cold War history well for a minute, when President Kennedy announced the moon mission, he didn't know how to get there, and neither did anyone else. The real innovation would come from people like Margaret Hamilton (who btw is a giant in the history of computing—you should absolutely read about her if you haven’t). Constraints drive creativity. Always have, always will.
The performance gap between US and Chinese AI has shrunk from almost 10% to less than 2% in under a year. Not despite US restrictions—because of them. Mary Meeker's research shows this beautifully. We don’t live in a world where one country will develop a singleton AI that dominates everything. The tech is proliferating too fast for that. Even Sam Altman doesn't believe that anymore. We're heading toward a world of many powerful AIs, each reflecting different values, serving different purposes.
And that just might be a scarier future to imagine than boring ol’ SkyNet, if you ask me.
Everyone Has Opinions
Big shocker I know. But part of why this Cold War thinking is growing like a weed is because everyone is leaning into the fears that come from a scarcity mindset. And AI is just too abundant to make a scarcity mindset rational. Intelligence is free and globally distributed, but most of us aren’t ready for it.
I read so many fears in the news from both sides of the Pacific.
AI will be used for surveillance.
What about the military applications?
What if we’re wrong and this is a great power competition?
What if economic futures are decided elsewhere because we’re too cooperative?
Both sides are right to be worried. Both sides are wrong to believe they can lean into Cold War playbooks and expect the same results.
The Risks That Don't Check Passports
Remember where you were during 2008? I do. I remember watching financial risks cascade across markets in real time. No one knew where it would stop.
A misaligned AI would spread even faster. AI doesn't care if it was trained in Berkeley or Beijing. A cascading cyberattack launched by a misaligned AI wouldn’t stop at the Pacific Ocean. In a nightmare scenario, if a human jailbreaks an AI to manipulate financial markets or design novel pathogens, it won't check your nationality before deciding whether you're affected. (Note that I’m leaning into the human-misuse angle here on purpose, because I think it is far more likely for a while yet, and we might as well focus on the real risk.)
Chernobyl radiation didn't stop at the Soviet border. Swedish nuclear plant workers detected it first. The 2008 financial crisis started with American mortgages and nearly collapsed Iceland. Truly systemic risks don't respect sovereignty, and AI risks will cascade even faster.
Here's what haunts me: We're treating AI like it's a zero-sum game when it's actually a coordination game. In zero-sum games, my win is your loss. In coordination games, we both lose if we don't work together. Stable markets are a coordination game. Pandemic prevention is a coordination game. And whether we like it or not, AI safety is a coordination game.
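To make that distinction concrete, here’s a toy payoff table (the numbers are invented, purely illustrative of the two structures):

```python
# Toy payoff tables: (row player, column player) payoffs. Numbers are made
# up purely to show the structural difference between the two game types.

# Zero-sum: every cell sums to zero. My win really is your loss.
zero_sum = {
    ("press", "hold"): (+1, -1),
    ("hold", "press"): (-1, +1),
}

# Coordination: mutual cooperation beats mutual defection for BOTH players.
coordination = {
    ("cooperate", "cooperate"): (3, 3),  # stable markets, shared safety work
    ("cooperate", "defect"):    (0, 2),
    ("defect",    "cooperate"): (2, 0),
    ("defect",    "defect"):    (1, 1),  # both worse off than (3, 3)
}

best = max(coordination, key=lambda moves: sum(coordination[moves]))
print(best)  # ('cooperate', 'cooperate') -- everyone does better together
```

The point isn’t the exact numbers. It’s that in the second structure, refusing to work together leaves both sides strictly worse off, and that’s the structure AI safety actually has.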
A Framework for Cooperation
So what do we do? First, we need to be honest about where competition is going to naturally happen (like it did between my brother and me). Let me propose something practical—graduated engagement. Think of it like this: a healthy spectrum of commercial competition on applications, and very intentional cooperation on catastrophic risks.
Where we might compete: Software. Robots. Cars. Probably planes (the AI doesn’t fly the plane yet, but I bet it will soon). Agents.
Where we must cooperate:
- Autonomous weapons proliferation. Nobody wins in a world where killer robots are cheap and everywhere.
- Biodefense protocols. An engineered pandemic doesn't care about your flag.
- Financial system stability. We learned in 2008 that financial contagion is real.
- Critical infrastructure protection. Because when the lights go out, everybody suffers.
And Yes There Are Practical Steps Forward
It’s not just pie-in-the-sky thinking.
1. Create an AI Hotline. During the Cold War, we had a red phone. When things got tense, leaders could talk directly. We need the same thing for AI incidents. Imagine an AI leak accidentally flooding networks across the Pacific. Without a hotline, we might think it's an attack. With one, we prevent a hot conflict over a bug in the code.
2. Joint Risk Assessment. Get the best AI scientists from both countries in a room, regularly, and have them list what keeps them up at night. Not the political stuff. The technical stuff. The "oh shit, the AI is doing something we didn't expect" stuff. Because catastrophic risk is worth everyone’s time.
3. Parallel Safety Standards. We don't need identical systems. Think about airlines: American Airlines and Air China fly different routes with different service, but they follow the same basic safety protocols. Why? Because nobody wants planes falling out of the sky. Same principle applies to AI. arXiv is a big step forward here, since we’re all sharing what we learn as we go, and that keeps everyone safer.
4. Research Transparency Zones. This one’s a bit speculative, but what if we could create specific areas of AI research in geographically neutral spaces around the globe? AI safety, robustness testing, interpretability research: these benefit everyone and threaten no one. It's like medical researchers sharing data during a pandemic. Some things are too important to hoard, and it’s worth dedicating space to this. Is this too far afield? Not really. Some of the moves around sovereign AI in certain countries may well trend in this direction.
5. Third-Party Verification. Switzerland is famous for its neutrality. Singapore manages to be friends with everyone. We could get neutral parties to verify AI safety claims where those claims are relevant globally. Is this idealistic? Not really. Honestly, arXiv isn’t a state, but it sort of does some of this through peer scrutiny already. The most idealistic part is the idea that we could understand our own systems well enough to describe them in ways that are useful for safety researchers. Cold War thinking emphasizes secrecy; tools like arXiv show that shining sunlight on AI keeps everyone safer.
Why This Isn't Naive (I Can Hear the Skeptics Already)
"You want us to cooperate with [insert your superpower here]? Are you insane?"
No, I'm a parent. And I'd like to build a less risky world for the kids.
We traded grain with the Soviet Union during the Cold War! We built arms control treaties with people we were convinced wanted to destroy us. We created the International Space Station with the Russians after decades of space race competition. Somehow, when survival is on the line, humans figure out how to cooperate just enough to avoid extinction.
This isn't about singing Kumbaya. It's about being smart rivals instead of stupid ones. My brother and I compete on everything—from board games to sports to barbecue. But if his house caught fire, I'd be there with a hose. That's smart rivalry.
If We Pop Back to 1776 for a Second…
The founders who signed the Declaration 249 years ago were practical. They traded with enemies, made alliances with monarchs they despised, and built a system assuming everyone would try to accumulate power. The documents they framed are mostly products of exhaustive compromise, with a little vision holding the whole thing together (death by committee in Philadelphia).
We need some of that spirit with AI. Idealism about the future we want—a world where AI serves human flourishing. Realism about how we get there—through verified cooperation on narrow technical issues while competing everywhere else.
The real race isn't between American AI and Chinese AI. It's between controlled AI and uncontrolled AI. Between AI that serves humans and AI that serves itself. Between a future where our grandchildren thank us for being wise and one where they look at us and ask why in that sort of voice that makes you want to flinch.
Whether we admit it or not, we're all in this together. Intelligence is growing by leaps and bounds and proliferating faster than any government can manage. That’s why this is a conversation for every single one of us. We all have a seat at the table.
To stretch a metaphor a bit, humanity’s baby—AI—is growing up terrifyingly fast. We can raise it right, or we can let our rivalries create a monster.
I know which future I want. And I’m glad we live in a world where AI belongs to everyone by virtue of the internet itself. Happy 4th, and may those ChatGPT recipes work well for you!