I see taste as one of the great differentiators of our age.
Most of the writing on it boils down to: “hey, it’s good to have taste!”
That’s not helpful, and I wanted to write something more useful that gets at why taste matters, how you cultivate it at various points in your career, and how it evolves as we work with smarter and smarter agents.
There’s a bit in here too on what it means to be human, teaching our kids about taste, and the evolving nature of work in the age of AI. Enjoy!
I've been thinking about taste lately, and not in the way you'd expect. Not the kind where someone knows which fork to use at dinner or can pronounce the wine correctly. I mean the gut feeling you get when something's off in a piece of work, even when you can't quite articulate why. That instinct that makes you say "this could be better" without necessarily knowing how to fix it yet.
Here's what I've noticed: as AI gets better at producing the mechanics of work—the spreadsheets, the documents, the code—taste becomes our differentiator. It's what's left when the grunt work disappears. And I mean really disappears. When Claude can build a working financial model in one shot, when GPT-5 Pro can reason through complex problems for minutes at a time, what exactly is our job?
The answer isn't comfortable, but it's true: our job is to know when the output isn't quite right. To recognize that hollow feeling in AI-generated work, that sense that something's missing even when all the pieces are technically correct. This isn't about being a snob. It's about having accumulated enough experience in a particular domain that you develop strong opinions about what good looks like.
Think about it this way. We're all flexible tool users—that's maybe the best definition of humanity I've encountered. We adapt to new tools constantly. But right now, we're adapting to tools that are gaining intelligence on a timeline of months, not decades. GPT-5 Pro genuinely outthinks me in specific domains. I'm not embarrassed to admit that. The model can hold more context, reason through more permutations, and generate more options than I can. But I still know when its output feels wrong for the situation.
This creates an interesting dynamic. We're simultaneously dealing with tools that are smarter than us in some ways while being utterly dependent on us for direction. They need our taste to guide them. Without it, they produce technically correct but contextually hollow work. They miss the nuances that come from living in the world, from having skin in the game, from understanding consequences beyond the immediate problem.
Where taste comes from
The thing about taste is that it's embodied. AI doesn't live through decades of awkward teenage years, doesn't sit through endless meetings where the real decision happened in the hallway afterward, doesn't learn to read when someone's "yes" actually means "I'll think about it." These models are brilliant, but they're disembodied intelligence. They've never had to metabolize failure, never felt the particular tension in a room when a project is about to go sideways, never developed that sixth sense for when the numbers look right but something's still wrong.
This embodied experience creates a kind of pattern recognition that transcends pure intelligence. You know how sometimes you can tell a meeting is going badly even though everyone's saying the right words? That's embodied taste. You've sat through enough meetings to recognize the subtle shift in energy, the way people start checking their phones, the forced enthusiasm that means we're all just going through the motions. An AI can read the transcript perfectly and miss everything that actually happened.
I think about this as the compost pile of the mind. All your experiences—the failures, the successes, the near-misses—decompose slowly in your brain, mixing together, breaking down into something richer. When you need to make a judgment call, you're not consciously reviewing every similar situation you've encountered. You're drawing on this composted mixture that has become instinct. The AI doesn't have a compost pile. It has training data, but that's not the same thing as lived experience slowly decomposing into wisdom.
Here's what's interesting, though: you don't need decades to develop taste. I've seen people just starting their careers who have incredible taste in specific domains because they're obsessed. They've gone deep on one particular corner of the world, and they know it better than people with thirty years of general experience. They have taste not because they've been around forever, but because they care intensely about getting this one thing right. That caring, that obsession, fast-tracks the development of taste in ways that time alone never could.
This matters because it means taste isn't just for gray hairs reflecting on long careers. If you're twenty-two and you've spent the last three years obsessing over video game mechanics or TikTok aesthetics or supply chain optimization, you have taste in that domain. Real taste. The kind that can spot what's wrong with an AI's output even when the AI is technically smarter than you.
Sidebar: For those starting their careers
If you're early in your career, here's the counterintuitive truth: you might have an advantage in developing AI-era taste. You don't have decades of muscle memory around outdated workflows. You're not attached to the way things used to be done.
Pick a corner of the world and go deep. Really deep. It doesn't matter if it's "professional" or not. Maybe you know everything about a particular game's mechanics, or you've spent years understanding a specific online community, or you're obsessed with a narrow technical problem. That obsession creates taste faster than broad experience ever could.
When you work with AI, be demanding. Don't accept the first output just because it seems smarter than what you could produce. You know your corner better than any model. Push back. Say "this part doesn't feel right" even if you can't fully articulate why. Fresh eyes plus deep domain knowledge is a powerful combination.
The key is to position yourself as the taste layer for your specific domain. You're not competing on years of experience. You're competing on depth of understanding in areas that matter right now. That's a recipe for rapid career growth in an AI world.
The practical mechanics of taste
So what does exercising taste actually look like in practice? It's not abstract judgment from on high. It's specific, detailed feedback. It's telling Claude, "Your phrasing is overdramatic here." It's catching that "two of these numbers are made up, the other sixteen are not." It's saying, "Please tell me when you need more information instead of guessing." These aren't philosophical critiques. They're practical interventions based on knowing what good looks like.
I see people respond to AI in two ways, and both are mistakes. Either they become overly deferential—the model said it, so it must be right—or they reject it entirely at the first error. What we need instead is the confidence to engage with these models as collaborators while maintaining our editorial authority. To say "I see where you got that, and it's technically correct, but it misses something important about this situation."
The new dynamic with something like GPT-5 Pro is particularly interesting. It's becoming more like consulting an oracle than having a chat. You prepare your prompt carefully, you wait minutes for the response, and then you have to interpret what comes back. This isn't instant messaging anymore. It's something more ritualistic, more considered. And that changes how taste operates. You're not just editing on the fly; you're interpreting sacred texts that took significant compute to generate.
But here's the key: even when the oracle is smarter than you in raw processing power, you still need taste to evaluate its pronouncements. I've looked at GPT-5 Pro outputs and thought, "This is brilliant strategic analysis given what I told you, but I didn't give you the context you needed." That's taste in action—not rejecting the intelligence of the response, but recognizing that intelligence without context is hollow.
The tactical application of taste is surprisingly specific. It's not enough to say "make it better." You need to articulate what better means in this context. Does it mean more concrete examples? Less corporate speak? More awareness of political dynamics? The clearer you can be about what your taste is detecting, the better you can guide these systems. And they need guiding. They're brilliant students who've never left the library.
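To make this concrete, here's a minimal sketch of what encoding taste criteria into a review pass might look like, assuming the Anthropic Python SDK. The model name and the criteria themselves are illustrative placeholders, not a recipe:

```python
# A minimal sketch of taste-as-specific-feedback, assuming the Anthropic
# Python SDK (pip install anthropic) and an API key in ANTHROPIC_API_KEY.
# The model name and the criteria below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

# "Make it better" is useless. Spell out what your taste is detecting.
TASTE_CRITERIA = """Review the draft against these standards:
1. Flag any number that isn't traceable to a source I gave you.
2. Flag phrasing that is overdramatic or reads like corporate speak.
3. If you need more information, ask me instead of guessing.
4. Note where the argument ignores political or organizational dynamics."""

def review(draft: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model you run
        max_tokens=1024,
        system=TASTE_CRITERIA,
        messages=[{"role": "user", "content": f"Draft to review:\n\n{draft}"}],
    )
    return response.content[0].text
```

The tooling doesn't matter; what matters is that each criterion is specific enough that you could check whether the model honored it.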
The anxiety of shifting value
Let's talk about why this shift creates such profound anxiety. People built entire careers on skills that are suddenly commoditized. That technical project manager who could wrangle 200 stakeholders, keep everyone aligned, and ship on time? AI can now manage complex dependencies, generate status updates, and identify blockers. The knowledge workers who derived their value from typing fast and processing information? ChatGPT types faster and processes more.
This isn't just about job security. It's about identity. If you've spent twenty years becoming excellent at something, and suddenly a model can do it in seconds, what are you? Who are you? The answer isn't comfortable: you're someone whose value now lies in judgment rather than execution, in knowing what to do rather than doing it, in recognizing quality rather than producing quantity.
The model makers understand this shift, and they're playing a different game than you might realize. They're not just building tools; they're trying to capture your work time the way Facebook captured your social time and TikTok captured your attention. Claude wants you thinking in Claude, generating artifacts in Claude, collaborating in Claude. OpenAI wants you in ChatGPT. They're building ecosystems designed to hold your attention and, more importantly, to hold your work process.
This isn't necessarily sinister, but it's important to recognize. These companies are betting that if they can capture enough of your workflow, they become indispensable. And the more of your work happens inside their systems, the more your taste becomes the only thing you're contributing. You become the editor-in-chief of AI output, the quality control for artificial intelligence, the human taste layer on top of machine generation.
The stress comes from how fast this is happening. Models are gaining intelligence on a timeline of months. Every time you figure out how to work with one level of capability, a new model drops that changes the game. It's exhausting. It's like learning to dance while the music keeps changing tempo.
Sidebar: For senior professionals
If you're deep into your career, this shift might feel like betrayal. You spent decades accumulating expertise, and now a model can replicate much of it instantly. But here's what the model can't replicate: your understanding of consequences.
You've seen projects fail for non-obvious reasons. You know why that technically perfect solution crashed and burned when it hit organizational reality. You understand the unwritten rules, the political dynamics, the human factors that never make it into documentation. That's irreplaceable.
Your task isn't to compete with AI on execution—you'll lose. Your task is to become the wisdom layer. To know which problems are worth solving, which stakeholders actually matter, which metrics are gamed and which reflect reality. You're not the project manager who herds 200 stakeholders anymore; you're the person who knows which five stakeholders actually make the decisions.
The transition is this: stop defining yourself by what you can produce and start defining yourself by what you can prevent, what you can direct, what you can recognize as important versus urgent. Your taste, built over decades, is about understanding second- and third-order effects that no model can anticipate, because it has never lived through the consequences.
Transform your experience into editorial judgment. Become the person who can look at an AI's technically perfect project plan and say, "This will fail in week three because you don't understand how procurement actually works here." That knowledge—embodied, contextual, consequence-aware—is your competitive advantage.
The evolution and cultivation of taste
Here's something crucial: taste isn't fixed. What I cared about in 2000 bears little resemblance to what matters to me now. The skills I thought were permanent turned out to be temporary. The expertise I accumulated in specific tools or platforms became irrelevant when those platforms died or evolved. But the meta-skill—the ability to develop taste in new domains—that persists.
This is liberating once you understand it. You're not trying to defend a fixed position. You're not clinging to expertise that's being eroded. You're cultivating the ability to develop taste in whatever domain becomes important. It's like learning to learn, but more specific: learning to develop strong opinions based on accumulated experience, even when the domain keeps shifting.
The cultivation of taste is an active process. You can accelerate it. Pay attention to when something feels off, even if you can't articulate why. Notice patterns in what works and what doesn't. Build your own compost pile of experiences. And most importantly, be willing to have opinions. Not uninformed opinions, but opinions based on pattern recognition that you might not be able to fully explain.
Fantasy football players do this naturally. They develop strong opinions about draft order and player selection based on pattern recognition from seasons of play. They have taste in that domain. The same person might have no taste in wine or art, but they can tell you exactly why taking that running back in the third round is a mistake. That's domain-specific taste, and it's valuable.
The trick is recognizing that professional taste works the same way. You develop it through repetition, through caring about outcomes, through being willing to say "this isn't good enough" and then figuring out why. And increasingly, you develop it through interaction with AI—through hundreds of conversations where you learn to recognize what the model does well and where it falls short.
Sidebar: Teaching kids about AI and taste
If you're a parent or teacher, you're facing a unique challenge: preparing kids for a world where AI can do much of what we traditionally taught them to do. Here's the shift: stop emphasizing execution and start emphasizing judgment.
When kids use AI for homework, don't just check if they got the right answer. Ask them: "Does this feel right to you? What would you change? What's missing?" Teach them to be editors, not just producers. Help them develop the confidence to push back on AI outputs, to say "this doesn't sound like me" or "this misses the point."
Encourage obsessions. If your kid is deeply into Minecraft redstone circuits or K-pop choreography or ancient Egypt, that's them developing taste. They're learning to distinguish good from great in a specific domain. That skill—the ability to develop strong opinions through deep engagement—transfers to whatever they'll need to do professionally.
Most importantly, teach them that working with AI isn't cheating; it's collaboration. But collaboration requires maintaining your own standards, your own sense of what's good. Help them understand that the human role isn't to compete with AI on computation but to provide the wisdom about what computations matter.
The goal isn't to shield kids from AI or to make them dependent on it. It's to help them develop the taste to use it well—to know when it's helping and when it's leading them astray, to maintain their own voice and standards even when a model can generate infinite alternatives.
Working with intelligence that surpasses your own
This brings us to the weirdest part of our current moment: we're collaborating with intelligence that surpasses our own in specific dimensions. GPT-5 Pro can hold more context than I can, reason through more permutations, generate more options. On certain types of problems, it's simply smarter than I am. This is new for humanity. We've never had to manage intelligence that exceeds our own.
But here's what I've learned: superior intelligence without taste is like a powerful engine without a steering wheel. The model can generate brilliant solutions to the wrong problems. It can produce technically perfect work that's politically impossible. It can optimize for metrics that don't actually matter. And it doesn't know it's doing any of these things because it lacks the embodied experience to recognize the difference.
This is why taste matters more as models get smarter, not less. When GPT-6 arrives—and it will—it won't eliminate the need for human judgment. It will amplify it. The smarter these systems get, the more important it becomes to have someone who can say, "Yes, that's brilliant, but it's solving the wrong problem." Or "That's technically perfect, but it will never work here because you don't understand the politics."
I think of it as being the editor of a brilliant but alien intelligence. The AI is the brilliant writer who's never left their room. They've read everything, they can connect ideas in ways you never could, they can generate options faster than you can read them. But they don't know what it's like to present to a hostile board, to navigate office politics, to recognize when a technically correct solution will create more problems than it solves.
Your taste—your embodied, experienced, composted understanding of how the world actually works—becomes the bridge between artificial intelligence and useful outcomes. You're not competing with the model. You're providing what it cannot: the wisdom that comes from consequences, from skin in the game, from having been wrong before and learned from it.
The new shape of work
We're moving into a world where insisting on quality, on usefulness, on contextual appropriateness becomes the job itself. Where having strong opinions rooted in experience becomes more valuable than the ability to execute tasks. Where taste—that ineffable sense of what's right and what's not—becomes the skill that matters most.
This changes everything about how we think about work. The old model was: learn skills, execute tasks, produce output. The new model is: develop taste, guide intelligence, ensure quality. It's a shift from being the person who does the work to being the person who knows what work should be done and whether it's good enough.
Some people find this depressing. They liked being the person who could execute, who could produce, who could make things happen through sheer effort. But I find it liberating. We're being freed from the mechanical parts of knowledge work to focus on what humans have always been best at: making judgments based on incomplete information, understanding context and consequences, knowing when something feels off even if we can't fully explain why.
The practical reality is that we're all going to become editors. Not in the narrow sense of fixing grammar, but in the broader sense of knowing what's good, what's useful, what's appropriate for the situation. We're going to spend less time generating first drafts and more time recognizing when a draft serves its purpose. Less time creating spreadsheets and more time knowing which numbers actually matter.
So cultivate your taste. Trust your gut when something feels off. Don't be afraid to push back on technically correct but contextually wrong outputs. Be specific in your feedback—not "make it better" but "this section is too abstract" or "you're missing the political dynamics here." And remember that being a flexible tool user means adapting not just to new tools, but to new ways of creating value with those tools.
The models will keep getting smarter. Make sure your taste keeps getting sharper. Because in the end, that's what remains uniquely human: the ability to know what matters, what works, and what's worth doing in the first place. A surprising amount of the rest is becoming…just computation.