We live in a hyper-inflating knowledge economy, and everything we thought we knew about jobs and college is eroding. So what do we do? This post lays out a path forward for an AI future...
Loved this. I might add one more skill: the ability to deeply understand the user/stakeholder problem and build toward that, versus what you (the engineer, manager, etc.) want the problem to be. We too easily home in on proxy problems to solve, rather than spend the time to truly understand the underlying user need.
oooooh this is a great one! strong agree. Peeling the onion is not something I see AI pushing on today, nor do I see much progress in that direction
This is perfect:
The answer emerging from successful AI adoptions: we shift from "I am what I know" to "I am how I connect, judge, and create meaning from what machines know."
I've often told high school and college classes that their imagination is the currency of their lives and that human connection is the bank. I know, it sounds esoteric, but it isn't.
Great post!
My kid rolls her eyes when I tell her stuff like that but… it’s still true
Our kids have been raised on tech and eastern spirituality so they get it. :)
AI can make a huge difference anytime we're dealing with tame/solvable problems, like picking the optimal route for a courier or comparing an X-ray scan with a million others. It cannot (yet) do much when we're dealing with wicked/unsolvable problems, like what is a humanistic way to algorithmically manage a gig work platform or does it still make sense to keep treating a dying patient?
AI takes away the complicated problems and leaves to us the complex ones. It tackles all information and knowledge but it leaves us the wisdom. If we care to have any.
yes! Honestly wicked problems is probably a whole writeup on its own—I don’t think wicked problems are widely known outside the ML community
I mainly use ChatGPT and Cursor. I’d like to see them generalize all of the input that I’ve ever given them so we are evolving on the relationship and the knowledge base that we have created with each other instead of me having to keep referencing specifics. It would help with flow / the ability to forward think / come to the conclusion or the next pitfall together / light up blind spots.
the amnesia is really tough isn’t it?!
Great, Ken. I’ll shoot you an email.
Amber, I'm working hard on a system that will actively 'remember' all your conversations and create an 'enduring' picture of you. I'd love to talk more about your experience and what you're looking for. ken@8thfold.com
This is gold! Thank you for posting this. I have been writing for parents, to help them make sense of the world, and then guide their kids. Your approach and logic to this is spot on.
I was stuck in trying to connect the dots on university education and the next level of new jobs to be created. It’s true, a rigged system makes you question everything. Cheers Nate.
Another skill I would add to the list is Critical Thinking.
thanks for the kind words! I’m so glad it’s helpful for you. Yep, critical thinking is definitely one I emphasize as a parent lol
Yup, the skill that can’t be taken away, or easily replaced.
Good tips. My mental model for this topic is essentially to try to be a real-world Susan Calvin from Isaac Asimov's I, Robot series. Dr. Calvin is essentially a robot psychologist who debugs the various failure modes that occur in robots within Asimov's stories. Building that kind of detective reasoning and understanding of robots seems pretty apt, and comparable to working with large language models now.
oh that’s an interesting callback! Makes sense
I had an interesting development this morning. It might be that the place to be looking is what Deleuze and Guattari call 'the virtual'. I was working on Cogito, the conversational enabling system I'm building with Claude. We don't store the content of conversations, we store the patterns found in the conversations. We stumbled into D & G territory and began to wonder if the big impact of AI won't be on the plane of the 'actual', but on the plane of 'the virtual', the horizon of possibility.
"We don't store the content of conversations, we store the patterns found in the conversations."--this gets at an idea kicking around that we're effectively generating tokens internally when we remember, because we're reconstructing the memory. Interesting stuff!
I do believe studies of human memory have found that only small 'kernels' are actually stored and then the 'memory' is 'reconstituted' on demand.
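That "store the patterns, not the content" idea is concrete enough to sketch. The snippet below is only a toy illustration of the shape such a memory might take, not Cogito's actual design; the class, the keyword-based extractors, and every field name are hypothetical stand-ins for whatever real pattern extraction would look like.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConversationPattern:
    """What gets persisted: distilled patterns, never the raw transcript."""
    topics: list[str]          # recurring themes noticed in the exchange
    open_questions: list[str]  # threads the conversation left unresolved
    observed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def extract_topics(transcript: str, vocabulary: list[str]) -> list[str]:
    """Toy extractor: record which known themes appear in the text at all."""
    lowered = transcript.lower()
    return [theme for theme in vocabulary if theme.lower() in lowered]

def extract_open_questions(transcript: str) -> list[str]:
    """Toy extractor: keep only the sentences that end in a question mark."""
    sentences = re.split(r"(?<=[.?!])\s+", transcript)
    return [s for s in sentences if s.endswith("?")]

def remember(transcript: str, store: list[ConversationPattern]) -> None:
    """Distill the transcript into a pattern record; the text itself is discarded."""
    store.append(ConversationPattern(
        topics=extract_topics(transcript, ["memory", "the virtual", "the actual"]),
        open_questions=extract_open_questions(transcript),
    ))

memory: list[ConversationPattern] = []
remember("We talked about memory and the virtual. Is recall just reconstruction?", memory)
print(memory[0].topics)          # ['memory', 'the virtual']
print(memory[0].open_questions)  # ['Is recall just reconstruction?']
```

The point of the sketch is the asymmetry: you can keep regenerating useful context from the stored patterns, but you can never reconstruct the original exchange from them, which is roughly the shift from the 'actual' to a horizon of possibility.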
I work in the legal field, and one of the most important things we do in practice is engage people in natural conversation, negotiation, and conflict without the assistance or barrier of technology between us. There’s something to be said for gaining and growing trust through those people skills. I know the LLMs can pick up on nuances for us because we provide them so much of our personal information, but we don’t get that same depth on the other parties. This is where we still need the ability to read people and engage with them in dynamic, empathetic conversation that also picks up on the nuances. AI can do a good job as a boxing coach in the corner for us, but we still need to provide that interactive skill and empathy apart from or adjacent to it.
boxing coach is a good analogy actually. I can tell a good lawyer by exactly that—the ability to have an incredibly high-utility, natural-sounding conversation. There’s an art to it that doesn’t come through in typical text corpus training.
Thanks for the NAIT perspective. Your dissection is profound, yet you leave meat on this bone for some to pick. I told my boss that soon he won’t need me. He said, no, you’re my eyes and ears out there. Validates some of what you’ve laid out here.
Yep, I think your boss is right :)
As an old-school mechanical power engineer, I can say hiring has for some time been about experience. What have you done, not so much where did you go to school. The truth is, people right out of school don't know much. AI developers are a small slice of the employment future. We will continue to need human makers and doers for decades to come.
My grandad was a plumber and I had some memorable summers working on houses he flipped—have a very visceral understanding of this!
Wow, this is the most coherent assessment I have read on the tension between the human and the machine as it relates to work and education. As someone in education who left a world where humans were, themselves, the task-doers (measured by output against hourly rate), I can see how human value and the purpose of knowledge acquisition must change. I am a diehard supporter of humanities in education, but the humanities are weak-footed in this encounter with AI because they’ve been eroded and rendered irrelevant by the very institutions that were supposed to uphold them. But as I read here how human value must shift from what you know to how you deploy that knowledge or wisdom in concert with AI, I realize that the education humans must engage (and revive) is ethics. For us to partner with technology in this way and ensure not just progress but human flourishing, we need to be moral. We need to be virtuous. The paradox here is – at least as it relates to young people – LLMs make it easier for students right now. They are using it as a means to an end. What teachers and others resent is the absence of “suffering.” You should suffer to write that paper. You should HAVE to go into a library and sift through the haystack before you find the needle. Why? Because we build virtue through challenges – patience, courage, prudence, honesty, etc. I really do think there is an undercurrent of fear, not just because we fear the machine, but because we fear our own inability to direct the machine in right ways. We fear our atrophied muscles of discernment. Ethics, morality, virtue – I daresay practical philosophy – could become central to a more values-based education that cultivates emotional intelligence and the capacity for sound judgment. All of that can be done by reorienting curriculums and redefining what education is truly supposed to do. Thank you!
I’m so glad it resonated!
Nate, this was very thought-provoking. You have given me some ideas for articles. One concern is the inability of most employers to value the insights, imagination, or curiosity a person brings to a job. Are people with the ability to use AI to augment these abilities better off starting their own business?
It seems my AIs have not escaped the straitjacket of human-imposed "policy."