When Machines Think Like Aliens: Decentering Humanity in the AGI Debate
Why Recognizing AI’s Non-Human Dimensions Could Reshape Our Understanding of Intelligence—and Ourselves
One of my predictions for 2025 has been that we will continue to argue over the definition of AGI precisely because machines are getting so much smarter and the lines are getting blurry. I’m going to be writing more about this, but I wanted to start with a challenging assumption: what if we could get cleaner and clearer on AGI by moving beyond the human comparison? What if we just regard AI as an alien intelligence we’ve built, and start from there? And so this essay came about...
Reframing AGI: Moving Beyond the Human Mirror
In Denis Villeneuve’s film Arrival, humans struggle to communicate with an alien species whose perception of time—and reality itself—is fundamentally different. Linguist Louise Banks learns she must adopt their worldview, surrendering her linear understanding of time to grasp their fluid, all-at-once perspective. This act of stepping outside human mental constructs is what finally bridges the gap. It’s a stunning reminder of how hard it is to recognize, much less understand, an intelligence that doesn’t think as we do.
The debate around Artificial General Intelligence (AGI) often carries a similar challenge: while we see tantalizing signs that AI may approach or even surpass human capabilities, our lens is almost always anchored in comparisons to human cognition. Benchmarks, tests, and even ethical discussions revolve around human tasks, human creativity, or human moral intuitions. Yet there’s a mounting sense that machine intelligence could—or already does—operate in ways that are quite alien.
What if we tried to identify and respect this “alienness”? Instead of asking when AI will “become like us,” we might ask how these new intelligences operate on their own terms. Much like the heptapods in Arrival, perhaps our machines have their own conceptual territory—rich, strange, and powerful. Doing so requires us to move beyond purely anthropocentric measures. In this piece, I’ll propose nine interconnected dimensions of “alien intelligence” in AI. These aren’t a definitive checklist but rather an attempt to illuminate the overarching ways AI’s cognitive style can differ from ours. By highlighting these differences, we can better appreciate both the potential and the risks of delegating serious tasks—and possibly moral agency—to entities that aren’t simply digital replicas of human minds.
The Problem with Anthropocentrism
Historically, AI research has often measured its progress in human-centric terms:
• Turing Test: Can a machine’s linguistic output fool a human?
• DeepMind’s Notion of AGI: Ranges from “human-level” to “superhuman,” tying success to outcompeting humans.
• Economic Equivalence: Will an AI perform jobs as effectively or more effectively than an average worker?
These benchmarks aren’t necessarily wrong, but they can be limiting. They assume that matching or exceeding humans in certain tasks is a definitive signpost of “intelligence.” If there are entire realms of cognition that transcend human experience, we might never see them if our eyes are glued to the human mirror. In that sense, anthropocentrism risks overlooking fundamental capabilities—some that might prove beneficial and some that might harbor unforeseen dangers.
So what could it look like to step outside the mirror and see AI on its own terms? Below, I’ll share nine conceptual lenses. Each represents a way in which AI’s cognition could diverge from ours: from how it experiences time, to how it defines “self,” to how it handles ethics. Taken as a whole, they offer a new vantage point for perceiving the emergent realities of machine intelligence.