23 Comments
Dennis Doyle

This is a compelling and well-grounded argument for cooperation between superpowers. The Cold War analogies work, and I agree: smart rivalry is essential if we want to avoid catastrophic outcomes.

But I keep circling back to this question: how do these ideas address rogue actors?

Hotlines, joint risk assessments, transparency zones—these all depend on mutual interest and a willingness to cooperate. But what about nations or non-state groups that benefit from chaos? What happens when AI tools fall into the hands of actors with no interest in stability, norms, or long-term survival?

That’s where the Cold War analogy starts to fray. Nuclear weapons were hard to build, centralized, and relatively trackable. But AI? It’s diffuse. The models are everywhere. The tools are increasingly open-source. The barriers to entry are crumbling.

We’re not dealing with controlled materials—we’re dealing with intelligence itself, self-replicating and fast-moving. So yes, we need the hotline. But we also need tools for when no one’s picking up the phone.

One practical step: we should be investing in traceability and attribution—digital watermarking, usage auditing, and model “flight recorders” that can help identify where a system came from, what shaped it, and who is deploying it. Think of it as the AI equivalent of serial numbers and black boxes. Without that layer of accountability, we may not even know who is causing harm, let alone how to stop them.
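A minimal sketch of what one such "flight recorder" entry might look like: a deployment event is serialized, fingerprinted, and signed so an auditor can later verify who produced it and that it wasn't altered. All names, fields, and keys here are hypothetical illustrations; a real registry would use asymmetric keys and standardized provenance formats rather than this toy shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by a model's deployer. In practice this
# would be an asymmetric key managed by an attestation registry.
DEPLOYER_KEY = b"example-deployer-key"

def record_deployment(model_id: str, weights_hash: str, deployer: str) -> dict:
    """Create a tamper-evident 'flight recorder' entry for a model deployment."""
    entry = {
        "model_id": model_id,          # serial-number analogue for the model
        "weights_hash": weights_hash,  # fingerprint of the exact weights deployed
        "deployer": deployer,
        "timestamp": time.time(),
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    # The HMAC ties the record to the deployer's key, so auditors can
    # attribute the deployment and detect any later modification.
    entry["signature"] = hmac.new(DEPLOYER_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    """Check that a recorded entry still matches its signature."""
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEPLOYER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["signature"], expected)
```

The design choice mirrors the "black box" analogy: the record itself is cheap to produce, but tampering with any field after the fact invalidates the signature, which is what makes attribution possible.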

Superpower cooperation is necessary—but it’s not enough. Not in a world where destabilizing actors don’t ask permission.

So I’ll ask directly: What are your antidotes to rogue deployment? To intentional misuse? To actors who want destabilization?

Nate

I think that’s a fair question, but I think we have to start from the technical reality that AI proliferates at the speed of the internet. It’s not atoms, it’s bits. Rogue actors will therefore have access to what they need. Inevitably. The question becomes whether we have resilience measures in place that anticipate that.

Sandro Pisani

Brilliant as always.

Nate

Thanks!

Gregor Bingham

I do wish Iain Banks was still with us. I think he framed one of the more positive visions of AI through The Culture, especially in The Player of Games, where the AI was sufficiently god-like but chose not to play God, knowing it couldn't foresee the future and had to be very wary about 'meddling' with other cultures. I think Iain would have much to help us imagine.

Nate

Ooh, love this. Good reflection! I miss Douglas Adams as well. He would have enjoyed the 2020s so much.

Gregor Bingham

Yes, so let's hope we don't find ourselves too soon at The Restaurant at the End of the Universe. I am sure the Vogons are nearby plotting the highway about now... :)

Nate

I have no doubt lol

Gary Cedar

You nailed it, Professor!!! Through these innovative AI research-transparency hubs, we can establish shared competitive-cooperative principles and standards surrounding AI… particularly in the context of creating the "very real possibility" of a NEW 1000+ year cooperative-AI global economics model. I believe the time is now to establish the new frameworks that will govern a healthy, sustainable, competitive global marketplace into the future.

My first job at 19 was as a 'civilian' research scientist/engineer for the US Army Corps of Engineers during the Cold War under Reagan. I was located in national headquarters and was part of the ICBM nuclear weapons systems design/testing team from 1981 to 1984.

The MAD war strategy led the superpowers to develop nuclear non-proliferation treaties around shared security/threat principles… rather than continuing to compete on and develop these systems, creating further systemic risks to the planet, ecosystems, and all biological life.

The way you are thinking now is exactly how the senior Corps leadership have been thinking, especially over the past year as AI has scaled rapidly. It's the Corps' job to build, secure, and maintain our US infrastructure, as well as to cooperate with other governments to ensure real-time collaboration on security, monitoring, threat protections, and mitigation.

As you state, we need to start architecting and designing the shared-global infrastructure now that will drive the adoption of trusted-ethical AI into the future.

This is a once-in-a-planet opportunity, and it’s crazy that we get to experience it. I’m an AI Optimist - so let’s start architecting this new global AI competitive-cooperative model NOW with all of the stakeholders at the table, so we can truly start building this exciting new AI-AGI future and move humanity forward 1000x, lol!

Happy July 4th!!! ✨🇺🇸✨🚀✨

Nate

I’d really like to see it. I think that as human-to-human interaction rises in value, in-person events like that will be particularly useful.

Gary Cedar

I began thinking about and drafting plans/designs on this topic in 2005, while developing a global protocol for a specific international business sector around the launch of the iPhone… so I will start pulling out some of my old drafts (now that you’ve raised this most important topic) and start sending a few your way!

A.J. Cave

Nate, what you're proposing is a cross between the process of technical software standardization and the geopolitical trust-building of arms control.

Nate

Well, there’s a reason I have books on both in the library lol

A.J. Cave

I’ve done this in the mobile space. At the time, beyond the usual players, the critical member we needed was China Mobile.

You'll need the following to come to the table:

1. Major Global Tech Companies (HW/SW)

2. Key Infrastructure & Cloud Providers

3. International Standards & Governance Bodies

4. Government & Public Sector Representatives

5. Leading Universities and Research Labs

6. Civil Society & Nonprofits

Gary Cedar

LOL!!! ✨🇺🇸✨🚀✨

Pedro Sr. Luzuriaga

OK, a very honorable and well-wishing perspective. My comment is that to achieve the desired goals with AI's help, we must first identify the elephant in the room. Coins have two sides, arguments have two sides, disagreements have two sides, and of course, ever since the first pair of humans populated the earth, good and evil became the evident curse. Always two sides. So, can AI build the road map taking this into consideration? You da man. Happy 4th of July from this immigrant lol.

Nate

Happy 4th!

Duane Stiller

Nate, well said. Co-opetition.

Kathy Utley

This got a restack!

Nate

Thanks, Kathy!

Kathy Utley

You are welcome and thank you for posting great content.

RW

I don't see the military application of LLMs. Are we going to bore our enemies to death by writing thousand-page novels? The problem is that the dinosaurs in Congress and the humanities majors in think tanks have no clue how technology works, so they label anything magical they don't understand as a "national security" threat, which is hilarious and dumb, and backfiring on them.

CJ Pulido

I love this, and I know as a species we are going to just spiral towards deterrence via Mutual Assured AI Malfunction (MAIM). It’s sad. I hope I’m wrong and we don’t see something catastrophic before nations are forced to the table to collaborate.
