I've been watching the engineering discourse tear itself apart.
Half the engineers I talk to think AI just gave them superpowers - they're shipping faster than ever, building MVPs in hours that used to take weeks. The other half is doom-scrolling through posts about companies halving their junior openings, replacing 350 developers with 3 engineers, predicting every job gone by 2030.
Both sides are half right. That's what makes talking about software engineering and AI in this moment so important.
The juniors I know are being told they're simultaneously more powerful than ever and completely replaceable. They're building prototypes that work but discovering they leak memory, ignore concurrency, store passwords in plain text. They feel like experts until production crashes and they realize they don't actually know how to debug what the AI built. Companies aren't hiring them because they think seniors with AI can do everything. But those same companies are creating a missing middle - in ten years, who's going to become the senior when there were no juniors to train?
The seniors are living a different nightmare. Some are thriving - using AI to amplify their expertise, becoming "elite" as salaries supposedly skyrocket for those who can orchestrate these systems. Others are watching decades of experience get commoditized, wondering if their value was just writing code that AI now generates in seconds. They're being told to evolve into "AI orchestrators" but nobody explains what that actually means when the AI hallucinates architectural decisions.
And the vibe coders - generating features at light speed, feeling productive as hell - that gnawing feeling they have? They're right to have it. They're moving fast but don't know what invariants are. They're shipping but can't explain why it works. They're building on top of systems they don't understand with tools that are literally probabilistic.
The discourse is missing something crucial: we're so busy debating whether AI replaces or augments engineers, we're not talking about what engineering actually is anymore. We're not addressing the brutal reality that yes, AI can generate code, but code was never the point. Systems are the point. Guarantees are the point. Understanding failure modes is the point.
Below, you'll find three resources that cut through the panic and the hype.
The first is a love letter that explains why engineering matters more, not less, when anyone can generate code. You'll discover the specific responsibilities that remain fundamentally human - translating messy business requirements into invariants that must always be true, building deterministic guarantees around probabilistic cores, debugging meaning flow when the AI misunderstands context at scale. It maps the new disciplines emerging right now - semantic engineering where you prevent prompt injection attacks, boundary engineering between deterministic and probabilistic worlds, the economics of intelligence when every token costs money and every inference has latency. Three laws that define what it means to engineer when your system accepts paragraphs from the internet as instructions.
The second is a 49-page handbook - 14 chapters of pure implementation. Not theory, not philosophy, but actual patterns that work. How to wrap non-deterministic AI in retry loops with validation. How to build semantic caches that understand "similar" questions. How to route between models based on complexity to control costs. Production RAG architectures with hybrid retrieval and reranking that won't fall over when you have real users. Observability stacks that track not just latency but semantic drift. Career paths that acknowledge the reality - juniors need to master prompt engineering AND system fundamentals, mid-levels need to own the human-AI boundaries, seniors need to understand intelligence economics and build safety frameworks. Real code, real exercises, real projects you can build to prove you're an engineer, not just someone who prompts well.
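To make the first of those patterns concrete: here is a minimal sketch of wrapping a non-deterministic generator in a validate-and-retry loop. The `with_retries` and `validate_json_with_keys` names, and the stubbed flaky "model", are my own illustrations, not code from the handbook - in practice `generate` would be a real LLM call.

```python
import json

def with_retries(generate, validate, max_attempts=3):
    """Wrap a non-deterministic generator in a validate-and-retry loop.

    `generate` is any callable returning a raw string (e.g. an LLM call);
    `validate` parses it and raises ValueError on malformed output.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        raw = generate()
        try:
            return validate(raw)
        except ValueError as err:
            last_error = err  # remember the failure and try again
    raise RuntimeError(f"no valid output after {max_attempts} attempts: {last_error}")

def validate_json_with_keys(raw):
    """Example validator: output must be JSON with a 'summary' field."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"not JSON: {err}")
    if "summary" not in data:
        raise ValueError("missing required key 'summary'")
    return data

# Stand-in for a flaky model: fails twice, then returns valid JSON.
_responses = iter(["oops", '{"wrong": 1}', '{"summary": "ok"}'])
result = with_retries(lambda: next(_responses), validate_json_with_keys)
print(result["summary"])  # ok
```

The point of the pattern is the boundary: the probabilistic core can emit anything, but nothing crosses into the rest of the system until a deterministic validator says it's well-formed.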
The third is an 18-page essay on software evolution - how classical engineering principles don't die, they transform. You'll see exactly how SOLID principles apply when your system includes components that hallucinate. How the Factory pattern works when you're creating model pipelines instead of objects. Why modularity becomes more critical when each module might be probabilistic. It traces the journey from deterministic to probabilistic thinking - from code-centric to data-centric design, from DevOps to MLOps, from writing code to orchestrating AI. New patterns like Cascade (chaining specialized models), Human-in-the-Loop (scaling human judgment), Shadow Deployment (testing models without affecting users), and Agentic AI (systems that don't just respond but act). Most importantly, it shows what the engineering role is becoming - less typing, more thinking, with time breakdowns showing the shift from 60% writing code to 40% architecture and validation.
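The Cascade pattern mentioned above can be sketched in a few lines: try the cheap model first and only escalate when its confidence is low, which is also how cost-based routing keeps your per-request spend down. Everything here - `Tier`, `cascade`, the stand-in models and their prices - is a hypothetical illustration, not the essay's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Tier:
    """One rung of the cascade: a model plus its cost per call."""
    name: str
    cost_per_call: float
    answer: Callable[[str], Tuple[str, float]]  # returns (answer, confidence)

def cascade(question, tiers, threshold=0.8):
    """Try tiers cheapest-first; stop at the first confident answer."""
    spent = 0.0
    answer, tier = None, None
    for tier in tiers:
        spent += tier.cost_per_call
        answer, confidence = tier.answer(question)
        if confidence >= threshold:
            return answer, tier.name, spent
    # Fall through: best effort from the last (most capable) tier.
    return answer, tier.name, spent

# Hypothetical stand-ins for a small and a large model.
small = Tier("small-model", 0.001, lambda q: ("maybe", 0.55))
large = Tier("large-model", 0.03, lambda q: ("42", 0.95))
answer, used, spent = cascade("hard question", [small, large])
print(used, round(spent, 3))  # large-model 0.031
```

The design choice that matters is the confidence threshold: set it too low and you ship cheap-but-wrong answers, too high and every request pays for the expensive model anyway.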
This isn't about choosing sides in the AI debate. It's about becoming the engineer who thrives regardless of how this plays out. The one who understands why that AI-generated authentication system is a security nightmare. Who knows when to use a $0.001 model versus a $0.03 model. Who can explain to a regulator why the AI made that decision. Who can build systems that leverage AI's power without abdicating engineering's responsibilities.
The machines are getting smarter. The systems are getting more complex. The stakes are getting higher.
Time to stop panicking and start engineering.
P.S. Interested in more engineering posts from me? You can find a big list here.