0:00
/
0:00
Transcript

2026 Sneak Peek: The First Job-by-Job Guide to AI Evolution

Exactly how 15 top jobs mutate into next year—plus the 12 massive roles emerging right now

If you’re reading this, you already sense the tectonic shifts underway—AI isn’t coming, it’s here. Let’s cut the generic forecasts: you don’t need more vague predictions about whether AI creates or kills jobs. You need precise, actionable intel on how AI reshapes your specific role, impacts your daily tasks, and shifts salaries.

Every other report treats the workforce as a broad blur. This guide is different.

It dives deeply into 15 critical roles, mapping exactly how each mutates: which tasks vanish, which elevate, and which skills command salary premiums. For example, you’ll learn why:

  • Software Engineers are seeing entry-level coding tasks vanish overnight, replaced by niche, hyper-valuable roles like GPU-cost tuning specialists who now command $250/hour.

  • Data Scientists face a rapid commoditization of traditional model-building because of AutoML—yet simultaneously find surging opportunities in AI auditing, bias detection, and ethical model oversight, commanding premiums that push salaries into the $220K range.

  • Vector Database Engineers, once obscure, are now one of tech’s hottest roles, with demand outstripping supply 4-to-1 and salaries rapidly approaching $300K.

  • Customer Success professionals face existential pressure as tier-one troubleshooting evaporates—thanks to Sam Altman openly declaring frontline support as “totally gone”—but see new roles like strategic “AI-Success Orchestrators” emerge at salaries upwards of $190K, centered around deep relationship management.

But this guide isn’t just about how existing roles evolve. It also reveals 12 entirely new roles—emerging roles without official titles yet, quietly forming the next high-value AI career opportunities. These aren’t speculative—they’re already beginning to crystallize right now, even if your org chart hasn’t caught up yet. Roles like:

  • Agent-Fleet Orchestrators, managing routing, resource allocation, and conflict resolution between multiple autonomous AI agents—essential as enterprises become agent-driven.

  • Context-Supply-Chain Managers, handling sophisticated version control for Retrieval-Augmented Generation (RAG) data, optimizing relevance, and ensuring fresh, accurate context to enterprise-scale AI models—crucial infrastructure roles you haven’t even heard about.

  • Human-Factor Tuners, blending psychology with statistical modeling, creating reinforcement learning from human feedback (RLHF) systems—the hidden architects behind future AI behaviors.

  • Red-Team Psychologists, uniquely positioned at the intersection of security and psychology, crafting adversarial prompts and systematically exploiting LLM vulnerabilities—an absolute must-have skill set in an age of constant AI jailbreak threats.

  • Carbon-Aware Schedulers, who manage AI training runs based on real-time carbon intensity and ESG commitments—already emerging in leading-edge AI-first organizations prioritizing sustainability.

  • Edge-Inference Optimizers, specializing in distillation and quantization to run AI efficiently on tiny devices—a role that will explode as intelligence moves into every physical device we touch.

Here’s why clarity at this depth matters and why you won’t find anything comparable anywhere else online:

Major industry reports (McKinsey, World Economic Forum) predict 92 million jobs displaced by 2030, while simultaneously millions of new roles emerge—but none drill down job-by-job, leaving you guessing about personal risks and opportunities. Other research from Deloitte and Microsoft confirms surging demand in certain roles—like cybersecurity specialists or AI ethics officers—but lacks specificity around exactly why these roles are exploding, or precisely how to position yourself to capture those salary spikes.

This guide exists because I’ve spoken personally with hundreds of talented, worried professionals—real people uncertain how to pivot in the face of AI-driven disruption. It matters deeply to me that you not only understand these seismic shifts but have a tactical map to navigate them confidently.

This is part 3 in a loose trilogy I’ve been crafting on AI and jobs over the last few weeks. If you’re interested, check out this note on the hardest hitting questions on AI and jobs and how to get an AI job in 2025. Those pieces focused on the job search and people’s worries. This piece is more in-depth: it focuses on the jobs themselves and how they are evolving.

Full disclosure: You won’t find anything like this elsewhere. No one else is delivering a role-by-role breakdown of AI-driven job risk, salary upside, and emerging high-value skills at this level of detail and precision. Consider this your essential guide before the AI avalanche hits. And landing in your inbox today, that map just became your most valuable career asset.

Subscribers get all these newsletters!

AI and Jobs: The Four Dynamics Reshaping Every Career in 2025

I get asked this question more than any other: "Nate, what about my job?" And those jobs are all distinct and unique. Today, we're diving into the specifics—the big dynamics, the 15 roles getting hit hardest, and exactly where the opportunities are hiding.

The TL;DR

Four seismic dynamics are reshaping work right now. AI isn't just coming for your job—it's creating massive new opportunities if you know where to look. AI proficiency now commands a ~28% salary premium, and professionals with AI skills earn ~21% more than their peers. If I had to guess, both those numbers will be higher next year. The gap is widening. And you can evolve into the AI-enabled roles adjacent to what you do best—but only if you understand what's really happening beneath the headlines.

The Four Core Dynamics That Actually Matter

Dynamic 1: Automation Avalanche - Execution Getting Cheaper

You've seen the headlines. PMs expected to prompt-engineer their way to double productivity. Engineers shipping twice as much code with Cursor. Customer success doing "other things" because AI handles the tickets.

This is execution getting cheaper. It's real, it's happening now, and frankly, it's the least interesting part of the story.

Dynamic 2: Trust Deficit - Speed Creates Quality Nightmares

Here's what nobody talks about: execution getting cheaper creates jobs because of quality and security disasters.

I know engineers who won't touch AI-generated code because it's "dirty." I know others who say it's fine—they rewrite it from scratch anyway. The point is, there are new security and quality challenges everywhere.

Example: Gina (35) is closing deals faster with ChatGPT, but she knows the bot hallucinated a roadmap feature to her customer. That's a credibility crisis waiting to happen. 80% of presales professionals are using ChatGPT in their workflow—living with this constant risk.

Sales teams feed Slack threads and decks into ChatGPT before calls. They don't prompt carefully. They get hallucinated presentations. Then the company is committed to something AI made up.

If you can't figure out how to hedge your LLM so it only answers within a specified range, if you try to inject it and it responds with nonsense, you're in trouble. These aren't edge cases—these are Tuesday afternoons in 2025.

Dynamic 3: Infrastructure Tsunami - Compute Costs Exploding

AI engineers are experiencing exceptional demand in 2025, with average salaries soaring to $206K—a huge $50K increase from the previous year. Why? Because compute costs are absolutely exploding.

Example: Mo (41) sees his monthly GPU bill eclipse his entire payroll. Not exaggeration—actual financial crisis. AI-ready data centers requiring $5.2 trillion of the $6.7 trillion compute capex by 2030. This is the largest infrastructure build in human history.

When compute costs explode, it creates a whole forest of downstream jobs. Being able to tune GPU costs is lucrative business because everyone is hemorrhaging money on AI. There are whole forests of new roles that people don't talk about much because they think of AI as scale-free. It's not.

Dynamic 4: Human-AI Boundary Crisis - Perfect Tech, Terrible UX

You can have perfect tech and still have users rage-quitting your chatbot. 77% of UX professionals name ChatGPT their #1 GenAI tool but only 45% are satisfied with results.

Humans and AI don't have interaction norms yet. There's vast confusion between the two. When someone tells you "AI is hallucinating," that sounds clear to them. But it takes someone very technical to debug that. Is it an undesired response? Lack of response? Over-complete response? Just the one someone noticed while 15 other examples went undetected?

That tension reveals the boundary crisis. Where there are problems like that, there are jobs.

The 15 Roles Getting Hit—and Where the Opportunities Are

15 Tech Roles in the AI Era

1. Product Manager

Product managers sit right in the middle of the automation and trust crisis. You need to scale your ability to manage agents, become more technical, and you've never had more value in scaling trust.

Here's what matters: You need to be able to filter all the vibe-coded ideas your organization generates. You need to be technical enough to have real conversations with engineering about AI models. You need to articulate a path forward that creates trust and deliver quality models in production.

When you do all that, you're talking about real value.

This highlights the age-old PM and MBA debate. MBAs aren't learning this stuff. If you need hard AI skills, you learn by building AI yourself or on the job—there's no other way because academics isn't keeping up.

Everyone tells you to build as a PM, and you should. But here's what people aren't telling you: the ability to earn trust in chaos matters more than anything else. Product management has always been about managing chaos and earning trust. AI multiplies that chaos, which means you have more opportunity than ever to earn trust. Nobody talks about this aspect enough.

2. Program and Project Manager

These folks are nervous when I talk to them. They worry that AI plans entire programs better than they can. AI writes Slack messages, email updates, and AI agents schedule meetings. Where does the program manager go in this scenario?

Here's what program managers really do: they're accountable for delivering against time, budget, and resources. LLMs aren't taking that accountability away from humans.

You want AI tooling that builds Gantt charts? That makes sense. You want fluency in AI to marshal resources and manage exploding AI projects? Absolutely necessary. But the heart of the role is accountability, and that isn't going anywhere.

Great program managers understand that accountability is the beating heart of their role. You still need people who are accountable for outcomes. Infrastructure costs for AI are exploding, and more money pours into AI than ever before. We need people who hold teams accountable to resource use.

3. Customer Success

This role always gets the black flag. Sam Altman talks about it going away completely, which makes me wonder why he's so focused on eliminating it.

The brilliant customer success people I know don't succeed by answering tickets—they succeed by holding relationships. That's fundamentally human work that you can't get an LLM to handle effectively.

Customer success is sticking around, but it's leaning into relationship management. The person who advocates for customers internally—aggressively challenging those pesky PMs—and who talks with sales about expansion revenue isn't getting automated. The ticket stuff is getting automated, which is great—let AI handle that piece.

The beating heart of the role is the relationship, and that directly extends the customer's lifetime value.

4. Software Engineering

I see people just out of college saying no one should study computer science anymore. Don't run away from engineering right now.

You may have to change how you engineer, but this role has evolved more times than I can count over 70 years. Software engineering is compute-enabled by definition, so of course it evolves with AI. That's not a reason to walk away from the field.

Do you know how many people need their vibe-coded work cleaned up? Insane amounts of code get generated with security holes and quality issues. Going faster creates massive speed and quality problems.

AI makes you not care about technical debt if you want the prototype fast, and I understand that approach. But if you're production deploying, you have to build well.

The beating heart of good engineering is understanding how to design durable technical systems that scale. Some engineers lean toward prototyping—that's fantastic work. Some can code something up in a weekend that shows how it works. I know engineers who do this brilliantly—they knock together something with real data in a weekend and call it a tracer bullet. People are wowed by this approach, and this skill isn't going out of style.

Can you production deploy to 100 million boxes? This skill isn't going out of style either.

Don't let AI writing code confuse you about the fundamentals. You must learn how technical systems get put together—that's the path to career leadership. Senior engineers worry about juniors over-depending on AI without understanding fundamentals. The risk isn't that there will be no jobs; the risk is that there will be no qualified people for jobs because they believe AI will write code for them. That's simply not true.

5. Executive Leadership (Senior Managers and Directors)

Senior managers and directors—those chairs are at risk. I know people who feel it in their bones, and you're kind of right to be concerned.

Look at the dynamics, because none of them favor senior managers and directors. Faster execution doesn't help when your core job isn't execution. Speed-trust-quality issues don't help because you're the one who has to deliver anyway.

Middle managers are fundamentally information bottlenecks. Their entire job has been filtering information for most of corporate history, and LLMs are already good at filtering information.

People joke that the CEO should be AI. I think middle managers are more at risk—not of being replaced by an AI agent (I don't buy that throwaway line in Project Vend about Claudius being a middle manager). The role is limited and somewhat endangered.

We'll still have directors and senior managers, but their spans will be much bigger. They'll be more stressed and will depend more on AI tooling to ladder up information flow. They'll chiefly exist as strategic accountability points.

If the company is executing a strategy, you want to hand a big piece to someone who's accountable for it—that's the director. You're going to hand that responsibility to the director, not an AI agent.

If you want to get ready for that reality, it's like the PM and program manager roles. Get good at accountability. Get good at saying, "I can take this strategy and put legs on it with the people and resources I have with almost no direction from my VP or SVP. I can just go and execute it." That's the heart of being a director.

If you are good at that, and if you are good at building AI cultures for your team, you're probably going to be okay. But don't expect that role to grow—there won't be lots more directors because the dynamics aren't in favor of job growth.

6. Data Scientists

The demand is skyrocketing for this role. People worry that data scientists might not be doing well because there are research scientists for AI now. Maybe data scientists are becoming obsolete.

It's not working out that way at all. People have so many needs for data science related to preparing their data for the AI age. This is one of the most blessed roles in the age of AI because there's so much data in the world that has to be prepared for AI applications. There's so much custom work that needs to be done at the enterprise level to suit models to datasets. Data scientists are never bored.

The heart of the role is design—it's fundamentally a creative role. People think it's not creative, but it absolutely is. I've worked with data scientists, and it's a really thoughtful role. It is not a role that is easy to automate, and it is a role where quality matters tremendously. All of those factors strongly argue for this trend of boosted demand for data science being durable. I'm quite bullish on data science.

7. DevOps and Machine Learning Operations

Demand is exploding, especially for machine learning ops. People don't know how to implement machine learning pipelines and operations effectively. If you as a DevOps person can transition from helping developers deploy software effectively to helping AI engineers effectively deploy and maintain models, you're handling automation out of new chaos patterns—which is exactly what you've always done.

What you do in DevOps is fundamentally taking the herd of cats that is a bunch of developers and figuring out how to get them into a clean production pipeline. Similarly, you now have a herd of cats that are ML engineers or AI engineers, and you have to herd them into an effective deployment pipeline and effectively manage the model as it's in production.

If you are in DevOps or machine learning ops now, the beautiful thing is you are solving human problems. Engineers are not going out of style, and you are still going to be needed to solve those human problems. Yes, there are going to be AI tools that help you with the work.

But the heart of this work is helping to align the complex work of building good software to production value. When do you deploy? Why do you deploy? How do you fix issues? What do fixes look like? How do you deploy securely? What do your different environments look like? The heart of it is getting all of that aligned so you deliver value to the customer and solve problems for engineers so they can focus on building software. Those are human problems you're solving, and those aren't going out of style.

8. UX and Human-AI Interaction Design

The problems of the human-AI boundary are real, and I see it happening all the time. The current AI interfaces are deeply imperfect and create a lot of confusion for users.

We need to understand that as execution gets faster, UI craft is becoming more valuable because the cheap stuff is becoming more commoditized. If you have really polished UI, it's going to stand out more because the sea of the internet is going to be filled with vibe-coded stuff that is not well-crafted.

Let me give you an example that just came out. Perplexity has done a really good job with UX interaction design. They just launched the ability to pass a message to the AI as you read the chain of thought in the middle of a research task. As far as I know, no one else lets you do that yet, and it's brilliant.

How many times have you as a user sat there and typed out a prompt, and then thought, "I forgot to say this"? You have to add it and then wait for the whole research prompt to run again. Not anymore. Now with Perplexity, you just pass the AI a note and it modifies its approach.

That is AI-enabled functionality, but it's also UX human interaction design. It's solving some of the human-AI boundary issues because you're recognizing the old truth that humans are better at correcting mistakes we have made than checking our work beforehand. That's why good email systems will often give you a delay on send and an undo button—they know you instinctively check your work after you send. That's UX design at work.

If you are designing for human-AI interaction, your world is getting richer with opportunities. If you are designing for humans using AI systems, it's the same kind of problem you've always solved. All of the systems we're using now are getting rapidly AI-enabled.

When people say, "Well, I'm not designing for AI," I respond, "But really?" Because almost everything is getting AI-ified at a tremendous rate. If it's not true for you now, it probably will be because your board is going to ask you to implement it soon. The trend is unbelievably pervasive.

This is not a case where UX professionals have to go and get AI experience because the AI experience is largely going to come to you. People are going to be asking you to design these interactions. The challenge for you is to think deeply about human-AI design and figure out how to build trust through interactions.

I'll give you another example that we haven't solved yet: How do you take the models we have, which are not good at taking accountability, and build in interaction dynamics that track accountability over time for models? If I tell the model, "That is incorrect, do not do that again," how can you signpost that and indicate to the user that you have instructed the LLM in a specific way? Then on the back end, can you work with AI engineers to pass that as a prompt to remind the LLM, thus improving the experience of even simple chatting because you're actually reinforcing accountability from the user?

There are a hundred different ideas like that around UX human interaction design. It's tremendous opportunity waiting to be explored.

9. Security and Red Team

There is a new AI jailbreak issue almost every day. Red teaming and security work—there are just not enough qualified people in the world. If you are able to start playing with jailbreaking, start playing with LLMs as attack services, start looking at prompts for potential vulnerabilities, start looking at systematic vulnerability databases, and start reviewing vibe-coded pieces of work for security issues, you will never be out of work.

That represents an absolutely huge area of work. It requires the same set of instincts that has made security people excel at what they do. I knew Greyhat people back in the day, and it's the same instinct to go and try to mess with systems and break them. Well, it turns out we have a whole new intelligence surface and we have to jailbreak that to make it more secure. There are huge opportunities in this space.

10. Cloud AI Infrastructure Engineers

This is where the infrastructure explosion is happening. You pay for yourself in this role by cutting spending. If you can find ways to master GPU arbitrage, to master the way you pass calls to the GPUs, and to master your cloud infrastructure build, you're literally optimizing for what is collectively the largest infrastructure build in human history.

AI data centers are on track for trillions of dollars in compute capital expenditure by 2030. We're talking about six or seven trillion dollars, something like that, and it'll probably be higher by the time we get there. They need cloud AI engineers to avoid spending more money than they have to. At that level, an engineer who can do their job well pays for their salary 10 or 100 times over in the way they handle these larger and larger fleets of GPUs.

It's an incredibly valuable occupation. If you were already working in cloud as an engineer, you're well-prepared for this transition.

11. Data Engineering

This involves figuring out how to transition from ETLs into AI pipelines. You may think AI is going to come for automated pipeline builds, but I don't think you're realizing how much data is going to be needed and how much data preparation there will be. This is the same situation as with data science—most of the failures I see in AI projects come from the data side.

If you are good at figuring out feature store governance, at figuring out vector ETLs, and if you're good at figuring out how new data types can be made accessible and useful for business use cases, that's where the value lies.

Extraordinary data engineers have always been distinguished by understanding both the technical side of the business and the customer use case. In this case, you need to understand the AI customer use case, how AI is changing what customers are expecting, the kinds of queries that are coming through, and then understand how the technical side enables that. You need to understand how vectorizing data is different from storing data traditionally. It's the same fundamental job, just with a new technology stack.

12. QA and AI Quality

This is an interesting area because we are fundamentally seeing a transformation here, and I don't know of anyone talking about it enough. Right now, we are putting most of our energy into QA software before it launches. With AI, we need to shift and put much more of our energy into QA as a durable quality threshold that is always active in production.

Why this shift? Because these systems produce probabilistic responses, which means you cannot deterministically test all this software. The value in QA now is sustaining the value of the software and guarding it over time. That's the heart of QA—sustaining the quality of the software—but you get even more work to do because there's more complexity, and because quality must be sustained over time. You can't just launch and forget the way you could with deterministic software.

This represents a major mindset shift. Most QA people I talk to are not ready for this world. They are used to P0, P1, P2 priorities—do the test and launch. That mindset won't work in the AI era. I do worry a bit, not because the jobs won't be there, but because the QA people I know aren't really thinking this way yet. This is an area where a mindset shift is absolutely critical.

13. Sales and Solutions Engineers

These roles can be very popular. Forward deployed engineers is another term for this type of work. Some people say that's different, but it's very similar. This is a case where AI is a powerful enabler for this job—you can code something up very quickly that demonstrates a personalized solution for the customer effectively.

The challenge is you also have the quality component to consider. It's on you as the forward deployed or solutions engineer to know what is actually doable from your product technically, and to vibe code or quickly code only those things that you can actually reliably deliver.

You are also on the front lines of one of the most interesting trends in B2B SaaS. Because speed and execution are getting better at the code level, it is possible to extend SaaS frameworks in ways that weren't possible before. When I was coming up in product, we were always taught to say no—PM says no. We were always taught to say no because you couldn't extend the software because it was so expensive to code.

It's not expensive to code anymore—it's cheap. If it's not expensive to code anymore, then you should be able to extend and personalize the software more, which means more opportunities for sales and solutions engineers, as long as you are careful about quality. I know a lot of solutions engineers who want to advocate for the customer and lean in on customization. Just make sure that you know the software stack you're working with and you don't overcommit to what you can deliver.

14. Edge Engineers

These are people who can put intelligence into smaller devices. This one is brand new—this is not a role that exists widely right now. There are absolutely indie hackers out there who love to build LLMs onto small devices. They think, "Let me run it on my laptop. Let me quantize it and run it on my phone. Let me compress this vision model." If you are that person, this role is going to exist specifically for you.

I know a lot of people who are like the Unix tinkerers in the 1990s and 2000s and the Linux tinkerers—they just can't stop tinkering and playing with technology. That's fantastic preparation for this kind of role. We are going to want intelligence in everything. If you think we don't, somebody is going to hire you to implement it anyway. Someone is going to hire you for the smart refrigerator and the smart toaster and the smart home robot that folds your laundry and the smart washing machine. All of them are going to need little large language models that can fit on the device and that need to be secure and that need to run on-premises.

Anyone who can figure out how to deploy intelligence at the edge—if you have the ability to talk about use cases and you're not just interested in your own technical work—you're going to have steady work.

15. Vector Database and Retrieval Engineers

This field is exploding. No one can get their hands on these people right now. If you work with RAG (Retrieval-Augmented Generation), you are in one of the most valuable places in tech, which is why I've pointed out in the past that understanding how RAG works is one of those cheat codes right now in the job market. If you are an engineer who works with RAG, you are even more valuable than you were before. It's absolutely incredible.

12 Emerging Roles That Don't Have Names Yet

The really smart money is watching roles that are emerging but don't have official titles yet:

Agent-Fleet Orchestration: Routing decisions between multiple AI agents. Conflict-resolution when agents disagree. Prompt-scheduling for optimal resource usage. Not science fiction—next quarter's org chart.

Simulation-Economy Builders: Digital-twin scenario authors creating synthetic worlds. "Behavior economists" for AI agents. Designing incentive structures for artificial entities.

Context-Supply-Chain Managers: Version control for RAG chunks. Ranking algorithms for retrieval relevance. Purging outdated context before inference. Critical for enterprise deployments.

Human-Factor Tuners: Psychologists partnering with statisticians. Crafting RLHF reward models. Understanding human preference distributions. Translating qualitative feedback to quantitative signals.

Carbon-Aware Schedulers: Shifting training runs to low-carbon grid hours. Real-time carbon intensity monitoring. Geographic workload distribution. ESG reporting for AI workloads.

AI-Risk & Compliance Leads: Mapping EU AI Act requirements to model lifecycle. SEC disclosure rules for AI usage. GDPR implications for training data. Cross-jurisdictional compliance strategies.

Synthetic-Data Foundry Operators: Generation pipeline management. Watermarking techniques for traceability. Quality assurance for synthetic datasets. Privacy guarantee validation.

Knowledge-Flow Curators: Enterprise prompt library management. Retrieval taxonomy design. Prompt versioning systems. Information architecture for AI age.

Edge-Inference Optimizers: Model distillation for tiny devices. Quantization strategies by hardware. Compiler optimization for edge. Making AI run on microcontrollers.

Red-Team Psychologists: Social engineering for AI systems. Adversarial prompt design psychology. Manipulation pattern recognition. Human psychology meets AI security.

Zero-Shot Biz-Process Designers: Rebuilding operations from scratch. Agentic workflow architecture. No legacy assumption design. AI-native process patterns.

Simulation-Economy Builders: Digital-twin scenario authors. "Behavior economists" for AI agents. Virtual market dynamics for training.

The Three-Level Framework: How to Navigate AI Job Evolution

Your job is changing faster than anyone’s told you, and you need a clear plan—not buzzwords. Here’s exactly how to act, based on where you are right now:

SURVIVE Level: Just Getting Started with AI

This is where you start if you’re feeling overwhelmed. The goal here isn’t revolution—it’s rapid, practical results:

  • Automate 10 hours per week of your repetitive, time-draining tasks immediately. Think email replies, meeting summaries, routine reports, or first drafts of docs. Tools like ChatGPT, Claude, or Cursor are ideal starting points.

  • Use AI-powered email filters and schedulers to declutter your inbox and automate calendar management—tools like Superhuman, Notion AI, or Gmail’s built-in AI features.

  • Document and track your time savings meticulously. Keep a simple weekly log to quantify your productivity wins. This isn’t optional—it’s proof you’re mastering basic AI workflows.

  • Identify the simplest, lowest-risk AI tools you can adopt today. No complex setups—just immediate results.

Goal: Free your mind and schedule from mundane tasks. Build the habit of leaning on AI first for routine productivity boosts.


ADAPT Level: Already Comfortable and Ready to Pivot

If you’re already familiar with AI basics, your next move is pairing your existing expertise with at least one solid AI hard skill:

  • Product Manager → LangChain Development: Learn to rapidly prototype product features and integrations using LangChain, building AI-powered apps in weeks, not months. Develop hands-on experience by completing structured online tutorials or a quick, four-week portfolio project.

  • Data Analyst → Vector Database Management: Become your team’s go-to expert for building and optimizing retrieval-augmented generation (RAG) systems. Get fluent in Pinecone, FAISS, Qdrant, or Weaviate through practical certification programs.

  • Engineer → Prompt Security and Jailbreaking: Become adept at identifying and mitigating prompt-injection vulnerabilities. Practice systematically testing prompts, understanding LLM attack surfaces, and applying safeguards.

  • Customer Success → AI-Driven Relationship Analytics: Shift your skills from basic ticket support to mastering AI-driven analytics for relationship management and predictive churn modeling, leveraging tools like Gainsight or ChurnZero.

  • Complete certifications and portfolio projects: Don’t just say you understand AI—show you can actually ship AI-powered features or products. Your portfolio is your proof.

Goal: Become visibly valuable—known internally as the person who smoothly translates your traditional skillset into crucial new AI workflows.


LEAD Level: Ready to Shape the AI Future

If you’re already confident in adapting your domain expertise to AI workflows, you’re ready to step up, shape strategy, and become indispensable. This means:

  • Designing new guardrails and standards your peers and industry don’t yet have, like frameworks for AI accountability, model bias management, or systematic data-quality checks.

  • Identifying emerging risk areas proactively—from compliance issues (EU AI Act, GDPR, SEC disclosures) to compute cost explosions. Be the person who spots risks and leads others away from danger.

  • Crafting repeatable frameworks: Develop clear standards or playbooks others can easily adopt. Whether it’s an internal prompt-library taxonomy or enterprise-level best practices for safe AI deployments, be the author of standards that scale.

  • Publishing, teaching, or presenting your original thinking—internally and externally. Start conversations at industry events, conferences, or high-level forums. Position yourself as a credible thought leader on how your field adapts to AI.

  • Establishing yourself as an AI-first leader: Demonstrate tangible impacts by running structured pilots or experiments. Prove business cases for AI initiatives, such as cost savings through optimized infrastructure or improved customer satisfaction through AI-driven UX.

Goal: Position yourself not just as a top-tier contributor but as the strategic leader who defines how your organization and industry leverage AI.

The Bottom Line

These four dynamics—automation avalanche, trust deficit, infrastructure tsunami, and human-AI boundary crisis—aren't abstract trends. They're creating specific problems that need specific solutions. People get paid to solve problems.

The question isn't whether AI will affect your job. The question is whether you'll position yourself where the problems are moving, or where they used to be.

97 million new roles may emerge by 2025, creating a net positive of 12 million jobs globally. But only for those who understand that AI isn't eliminating work—it's reshaping where value gets created.

The jobs I've outlined here aren't theoretical. They're being posted right now, with real salary ranges, by real companies with real budgets. The opportunities are massive.

My goal laying out these roles has been to give you a compass and a map to specifici roles. Not just general guideposts. Take this as your starting point and dive in. The world of AI awaits.

I make this Substack thanks to readers like you! Learn about all my Substack tiers here

Extra Reading

McKinsey Reports:

Stack Overflow Developer Survey 2025:

PWC Global AI Jobs Barometer 2025:

Gartner Predictions:

RAND Corporation Study:

Salary Data (Levels.fyi):

Additional Supporting Sources:

Discussion about this video

User's avatar

Ready for more?