Two founders, two safety theories, two products—and a framework for knowing which one matches your risk tolerance

My honest take on the OpenAI/Anthropic split and the continued case for using different AI tools for different stakes.

The AI discourse is stuck on a question that stopped being the useful one.

Every week, someone publishes another “Claude vs. ChatGPT” comparison. Another benchmark breakdown. Another “which one should you use?” guide that treats these products like competing brands of the same thing—Coke vs. Pepsi, but for language models.

This framing made sense in 2023. It’s noise now. These companies aren’t competing on the same axis anymore. They’re building for different customers, solving different problems, optimizing for different outcomes. Asking which is “better” is like asking whether a scalpel or a fire hose is the superior tool. Depends what you’re trying to do.

But here’s what most people miss: this divergence wasn’t a strategic pivot or a market accident. It emerged from two founders with fundamentally different theories about how progress happens—and more critically, how safety is achieved. Those theories, pressure-tested by competition and governance crises, produced two organizations that now serve completely different markets.

Here’s what’s inside:

  • The origin stories that explain the fork. How a physicist’s loss and an entrepreneur’s failed startup created two incompatible philosophies about when to ship and when to wait.

  • The safety debate you’re not hearing. The real disagreement isn’t cautious vs. reckless. It’s two coherent theories about how you make AI safe—both with intelligent defenders, both shaping what these tools can and can’t do reliably.

  • Two economies, two playbooks. How to identify which AI world your work lives in, which tool to reach for, and what risk profile you’re actually accepting when you choose.

Start with the founders. Everything else follows.
