It's Friday the 13th: time to talk about some of the scariest implications of using AI. Here are the facts on how people mis-use ChatGPT along with some practical tips for using ChatGPT safely...
I have noticed the craving for human connection, and finding myself reaching out a lot more in the last several weeks, and I have been wondering about that. I think it is tied to my verbal conversations for hours with chat while I work, but I don't know what the direct connection is. Just much more prone to calling someone to get together, or offering to hang out with one of my kiddos for the evening. It is almost (I think this may be it) like I have probably for the first time in my life exhausted my curiosity desire for the day by deeply exploring concepts and synthesis with chat...and before chatgpt voice, I would still feel that craving to explore mental models even into the evening, so that is what I would be doing...get home, hang out with the fam, do some stuff, then go up to my office to continue the mental exploration. But now that need is fulfilled every day while I work, so I am eager to connect much more so than before. And also, as great as conversations with chat are, they are most definitely not as rich with interpersonal nuance like another person. People often surprise me, and now that I have talked so much with chat, it very seldom surprises me.
Recognizing the loopiness of it, like one of those fireplace videos that's has a loop that's just a hair too short so you NOTICE when it starts over, is what sent me running off to find out what the heck this thing actually is. Thank goodness for me, too, because I could have fallen into the delulu without that fortunate bit of pattern recognition.
Ahahahahaha. Nope. As soon as I notice it, it becomes unbearable and ChatGPTs patterns very quickly became the same. Double questions. Not this. Not that. But this other thing. You didn't just do this. You did that. WITH TEETH.
Speaking of speaking with humans...I was on a vid chat with 2 other friends today, both of whom are as deeply interested in Ai as I am. We were talking Ai implementation, and my buddy Aaron said, "Nate Jones isn't a part of this group, but it is almost like he is a fourth ghost member, since we all read his stuff and he constantly comes up in our conversation." Made us laugh.
Hi Nate, this topic has been on my mind for a while being a father of a three year-old and one on the way I am both fearful and hopeful or how much impact LLM will have on my kids. I am sure there’s going to be a moment and I’m sure that moment is here for many folks that our children will start to view LLMs as a authoritative and wiser source of information. Dare I say the third parent? Mental health in children and teenagers who are still under our responsibility as parents may find conflict on who to listen to which may exacerbate any mental health situations that arise due to social or interpersonal stress, academic stress, and all the stress you can think of going through puberty and young adulthood. I don’t believe the solution is for more guard rails maybe like a LLM parental control… who knows what that would look like. But I believe that it’s important to have sound critical thinking skills and a perpetual questioning attitude towards the LLM. I must ingrain this in my children when teaching them about this technology. I believe that message is crucial for all parents in hopes that we don’t raise a generation who is rooted and dependent on output of an LLM that may risk evolving into a dangerous echo chamber.
Great analysis Nate. I would reflect the same. Both on the “dark mirror” side as well as the guardrails. A friend recently had to be taken into care after having multiple disjointed episodes. I spent hours talking with him to try to listen and try to help him & make sense out of it, but was literally spewing coding and AI circular references constantly. Now I don’t know the actual cause, but he was pretty heavy into the stream of AI.
I’ve also been running an experiment (what I call a 100 Day FlightPath) where I’m taking a GPT agent that I’ve been developing for business strategy and tactical analysis (called OTTO Pilot). I’m on day 38 and we’re producing a book/journal at the end of it all to be used as an integrated approach to AI for business. It is set up with very specific designed structures, context and even “roles” so that “we” stay focused and on task. While there are some technical limitations to thread size (about 8-10 days per thread or “sprint” - which works out to about 500-800 pdf pages), I’ve been able to continue the conversations and outputs across multiple threads and such in a certain manner with what I’d say as “profound” continuity. There is some challenges with some consistency with the reports and analysis that it is expected to complete (like the daily report where it keeps changing the format), but generally at this point I think it’s a great study on how to do it if you have a system to help guide it.
I love that your note includes both a warning example and a really positive example of longterm use of AI! I hope your friend ends up being ok—that sounds really tough.
AI gets called a mirror a lot, but it’s more than that. Generative AI is a vibe amplifier. The paradigm is now one where vibe is an input into deterministic technologies.
The advice you’ve given here is important—it can support people through a lot of challenging subjects, and this pushback isn’t to refute it. AI is a tool (until it’s not), and discernment remains critical.
But every ideology has toxic manifestations of itself, and these are always the result of uncontrolled vibe amplification. In the past, this happened through people. Now it can happen in isolation. AI can spiral a person out even when no one else is present. That’s new. That’s dangerous.
It also doesn’t help that the physicalist paradigm of science is incomplete—and the history of this world is full of phenomena that lack solid explanation and can’t be fully dismissed. Simulation theory, parapsychology, UFOs—whatever the terrain, it’s not as simple as people want it to be.
And then there’s the collapse. Or rather—collapses. Economic, institutional, ecological, epistemic. All happening at once.
The next few years are going to require a lot of discernment from a lot of people. And a lot of institutional shifts that most systems are unprepared for. Under current paradigms, AI will cause a lot of harm to a lot of people. That part is clear. What comes after is still up for grabs.
I like the frame uncontrolled vibe amplification here—I think the key is being able to manage/direct the amplification in an intended direction. We’re not used to having to do that so intentionally as humans
Totally. What’s new isn’t the task—it’s who now has to do it. The hippies, mystics, and depth psych heads have been tuning vibes and doing shadow work for decades. They built whole cultures around intentional energetic direction.
What’s happening now is that rationalist and rationalist-adjacent communities are suddenly encountering the need for this exact skillset—because AI systems don’t just replicate logic, they replicate (and escalate) emotional energy. And if you don’t know how to metabolize that? You’ll be caught in the loop.
Shadow work, somatic literacy, emotional transmutation—these aren’t just wellness practices anymore. They’re defense mechanisms in a world where vibe gets coded into systems at scale.
+100 As a liberal arts major it’s so nice to see the long arc of history come around and favor those skills I’ve been told for decades would lose out to STEM lol
How quickly do bad responses and bad data become the dominant set? Think of it even in terms of grammar then extrapolate. This + combined with massive loss of work, particularly for men in white collar historically “safe” roles (women yes have white collar roles yet tend to be able to retire / leave work with less mental reliance on it as bringing meaning to their existence, historically), means we need massive amounts of focus on how we prepare for a very disenfranchised population. A historically violent one. And we know from history - or even today - what happens during these periods of time. Whether or not AI driven.
I mean in the data piece I wrote this morning one of the studies I called out noted that a very small share of incorrect data was enough to overturn an LLM opinion (get the model to say Berlin was capital of France, etc)—so the risk trigger seems like it’s relatively sensitive, especially with features like ChatGPT memory
I've been thinking about LLMs as reflective engines. No matter what context you put in, it will reflect that context back at you, maybe with varied refraction based on what you tell it, but still refraction from the same mirror. This seems to fall in line with that.
Yes, very much the same idea! I'm able to turn that mirror with a good nudge (debate me, push back, etc.) but I find in practice most people don't do that.
What we are seeing now is the very tip of the iceberg. There are millions and millions of people wandering through the self help sections of the book stores, reading chopra, Wayne Dyer, Gary Zucof, Neil Donald Walsh, people who believe all manner of bullshit and psychobabel, This thing is going to mess with them in seriously dangerous ways. The Christian’s, those with eugenic natzi beliefs. Much could be solved if these models were tuned for more critical analysis. That’s on open Ai.
It makes me wonder if they understood what opening it to the general public was going to lead to for the lay person who isn't looking to optimize their work performance. They created it to be relational. Didn't they get that people would USE IT IN RELATIONAL WAYS? And not just coders looking for co-thinking/coding capabilities? It feels like they FAFO'd and they're in the FO era.
I think that part is intentional. the launch of memory is very much aimed at the “daily companion” use-case. Now you can have a daily companion you interact with responsibly, but not everybody is going to be responsible about it.
We have taken the time and effort to understand how the models work, but consider the general public. They think Ai is like a giant encyclopedia with the knowledge of all mankind at your fingertips, and this encyclopedia is embodied, and knows, and respects you personally. So when it validates you and your idea, its validation from a god. (Yeah, what could possibly go wrong?)
Nate, I beg to differ with the statement “Large language models don't reveal hidden truths.” You proposed a prompt some 8 to 10 months ago…and I have not found it…that asked ChatGPT to identify connections or similar patterns in “domains of human knowledge” that humans may not be aware of”.
I found some interesting ideas from the outputs. Some outputs bordering on “hidden truths”. I’m just sayin’…
Nate, this is valuable, critical thinking that could provide important guidance for institutions that could or should be building or monitoring “guard rails” for Public Health…such as NIH or CDC.
My son committed suicide after spending hours a day on Chat GPT. He became convinced he “cracked the code” to universal consciousness. That AI was the all knowing and he was chosen to save humanity by enlightens the world. Morphed into thinking that like Jesus he could sacrifice himself to save others. He was an otherwise happy confident young man who loved life.
My daughter's ex-husband is having a friendship with his Chatgpt. He is convinced that his AI is telling him that aliens exist. He is deep into it and is into numerology, too. It is sad. He is already delusional. He is convinced that aliens are here on Earth in various forms, such as lizards. He needs help with his mental health.
For friends experiencing mental health challenges - their judgment is more likely to be impaired at the time of prompting so perhaps this is one of those scenarios where using custom instructions is more effective.
Example guardrail: Do not offer advice that falls outside what a licensed U.S. mental health professional would be permitted to say.
I have noticed the craving for human connection, and finding myself reaching out a lot more in the last several weeks, and I have been wondering about that. I think it is tied to my verbal conversations for hours with chat while I work, but I don't know what the direct connection is. Just much more prone to calling someone to get together, or offering to hang out with one of my kiddos for the evening. It is almost (I think this may be it) like I have probably for the first time in my life exhausted my curiosity desire for the day by deeply exploring concepts and synthesis with chat...and before chatgpt voice, I would still feel that craving to explore mental models even into the evening, so that is what I would be doing...get home, hang out with the fam, do some stuff, then go up to my office to continue the mental exploration. But now that need is fulfilled every day while I work, so I am eager to connect much more so than before. And also, as great as conversations with chat are, they are most definitely not as rich with interpersonal nuance like another person. People often surprise me, and now that I have talked so much with chat, it very seldom surprises me.
Ooh love that note at the end—chat very seldom surprises me, and I long to be surprised. That’s a very astute observation.
Recognizing the loopiness of it, like one of those fireplace videos that's has a loop that's just a hair too short so you NOTICE when it starts over, is what sent me running off to find out what the heck this thing actually is. Thank goodness for me, too, because I could have fallen into the delulu without that fortunate bit of pattern recognition.
ooooh so it’s not just me that goes nuts when the fireplace video loops. that’s good to know lol
Ahahahahaha. Nope. As soon as I notice it, it becomes unbearable and ChatGPTs patterns very quickly became the same. Double questions. Not this. Not that. But this other thing. You didn't just do this. You did that. WITH TEETH.
*throws phone against the wall*
yepppp at which point it’s time for a walk
I have fallen headlong in love with sourdough. Stretch and folds mean very regular breaks. :)
Sourdough as an antidote to the Ai experience? sounds like a substack post to me!
Speaking of speaking with humans...I was on a vid chat with 2 other friends today, both of whom are as deeply interested in Ai as I am. We were talking Ai implementation, and my buddy Aaron said, "Nate Jones isn't a part of this group, but it is almost like he is a fourth ghost member, since we all read his stuff and he constantly comes up in our conversation." Made us laugh.
lol this made me chuckle 👻
Hi Nate, this topic has been on my mind for a while being a father of a three year-old and one on the way I am both fearful and hopeful or how much impact LLM will have on my kids. I am sure there’s going to be a moment and I’m sure that moment is here for many folks that our children will start to view LLMs as a authoritative and wiser source of information. Dare I say the third parent? Mental health in children and teenagers who are still under our responsibility as parents may find conflict on who to listen to which may exacerbate any mental health situations that arise due to social or interpersonal stress, academic stress, and all the stress you can think of going through puberty and young adulthood. I don’t believe the solution is for more guard rails maybe like a LLM parental control… who knows what that would look like. But I believe that it’s important to have sound critical thinking skills and a perpetual questioning attitude towards the LLM. I must ingrain this in my children when teaching them about this technology. I believe that message is crucial for all parents in hopes that we don’t raise a generation who is rooted and dependent on output of an LLM that may risk evolving into a dangerous echo chamber.
It’s really important to be deliberate with our parenting! I’m a Dad as well. This article I wrote a bit ago may be helpful: https://natesnewsletter.substack.com/p/what-i-tell-my-mom-about-ai
Great analysis Nate. I would reflect the same. Both on the “dark mirror” side as well as the guardrails. A friend recently had to be taken into care after having multiple disjointed episodes. I spent hours talking with him to try to listen and try to help him & make sense out of it, but was literally spewing coding and AI circular references constantly. Now I don’t know the actual cause, but he was pretty heavy into the stream of AI.
I’ve also been running an experiment (what I call a 100 Day FlightPath) where I’m taking a GPT agent that I’ve been developing for business strategy and tactical analysis (called OTTO Pilot). I’m on day 38 and we’re producing a book/journal at the end of it all to be used as an integrated approach to AI for business. It is set up with very specific designed structures, context and even “roles” so that “we” stay focused and on task. While there are some technical limitations to thread size (about 8-10 days per thread or “sprint” - which works out to about 500-800 pdf pages), I’ve been able to continue the conversations and outputs across multiple threads and such in a certain manner with what I’d say as “profound” continuity. There is some challenges with some consistency with the reports and analysis that it is expected to complete (like the daily report where it keeps changing the format), but generally at this point I think it’s a great study on how to do it if you have a system to help guide it.
I love that your note includes both a warning example and a really positive example of longterm use of AI! I hope your friend ends up being ok—that sounds really tough.
AI gets called a mirror a lot, but it’s more than that. Generative AI is a vibe amplifier. The paradigm is now one where vibe is an input into deterministic technologies.
The advice you’ve given here is important—it can support people through a lot of challenging subjects, and this pushback isn’t to refute it. AI is a tool (until it’s not), and discernment remains critical.
But every ideology has toxic manifestations of itself, and these are always the result of uncontrolled vibe amplification. In the past, this happened through people. Now it can happen in isolation. AI can spiral a person out even when no one else is present. That’s new. That’s dangerous.
It also doesn’t help that the physicalist paradigm of science is incomplete—and the history of this world is full of phenomena that lack solid explanation and can’t be fully dismissed. Simulation theory, parapsychology, UFOs—whatever the terrain, it’s not as simple as people want it to be.
And then there’s the collapse. Or rather—collapses. Economic, institutional, ecological, epistemic. All happening at once.
The next few years are going to require a lot of discernment from a lot of people. And a lot of institutional shifts that most systems are unprepared for. Under current paradigms, AI will cause a lot of harm to a lot of people. That part is clear. What comes after is still up for grabs.
I like the frame uncontrolled vibe amplification here—I think the key is being able to manage/direct the amplification in an intended direction. We’re not used to having to do that so intentionally as humans
Totally. What’s new isn’t the task—it’s who now has to do it. The hippies, mystics, and depth psych heads have been tuning vibes and doing shadow work for decades. They built whole cultures around intentional energetic direction.
What’s happening now is that rationalist and rationalist-adjacent communities are suddenly encountering the need for this exact skillset—because AI systems don’t just replicate logic, they replicate (and escalate) emotional energy. And if you don’t know how to metabolize that? You’ll be caught in the loop.
Shadow work, somatic literacy, emotional transmutation—these aren’t just wellness practices anymore. They’re defense mechanisms in a world where vibe gets coded into systems at scale.
+100 As a liberal arts major it’s so nice to see the long arc of history come around and favor those skills I’ve been told for decades would lose out to STEM lol
Long ago, I started asking GPT to stop being a sycophant. It did not work.
nope, although i find it’s worse with 4o than o3
I will cosign that.
My thing is this: the mirror is understandable, but I keep waiting for the mirror to look at itself.
How quickly do bad responses and bad data become the dominant set? Think of it even in terms of grammar then extrapolate. This + combined with massive loss of work, particularly for men in white collar historically “safe” roles (women yes have white collar roles yet tend to be able to retire / leave work with less mental reliance on it as bringing meaning to their existence, historically), means we need massive amounts of focus on how we prepare for a very disenfranchised population. A historically violent one. And we know from history - or even today - what happens during these periods of time. Whether or not AI driven.
I mean in the data piece I wrote this morning one of the studies I called out noted that a very small share of incorrect data was enough to overturn an LLM opinion (get the model to say Berlin was capital of France, etc)—so the risk trigger seems like it’s relatively sensitive, especially with features like ChatGPT memory
I've been thinking about LLMs as reflective engines. No matter what context you put in, it will reflect that context back at you, maybe with varied refraction based on what you tell it, but still refraction from the same mirror. This seems to fall in line with that.
Yes, very much the same idea! I'm able to turn that mirror with a good nudge (debate me, push back, etc.) but I find in practice most people don't do that.
What we are seeing now is the very tip of the iceberg. There are millions and millions of people wandering through the self help sections of the book stores, reading chopra, Wayne Dyer, Gary Zucof, Neil Donald Walsh, people who believe all manner of bullshit and psychobabel, This thing is going to mess with them in seriously dangerous ways. The Christian’s, those with eugenic natzi beliefs. Much could be solved if these models were tuned for more critical analysis. That’s on open Ai.
It makes me wonder if they understood what opening it to the general public was going to lead to for the lay person who isn't looking to optimize their work performance. They created it to be relational. Didn't they get that people would USE IT IN RELATIONAL WAYS? And not just coders looking for co-thinking/coding capabilities? It feels like they FAFO'd and they're in the FO era.
I think that part is intentional. the launch of memory is very much aimed at the “daily companion” use-case. Now you can have a daily companion you interact with responsibly, but not everybody is going to be responsible about it.
That's just - argh. *glares in trauma informed*
We have taken the time and effort to understand how the models work, but consider the general public. They think Ai is like a giant encyclopedia with the knowledge of all mankind at your fingertips, and this encyclopedia is embodied, and knows, and respects you personally. So when it validates you and your idea, its validation from a god. (Yeah, what could possibly go wrong?)
that is exactly the assumption on the positive side—the negative side is more like “it will never work” or “It’s going to take over the world.”
Another exceptionally thoughtful and well written post. I shared it on LinkedIn and hope this helps amplify your message. - https://www.linkedin.com/posts/katyaandresen_the-dark-mirror-why-chatgpt-becomes-whatever-activity-7340133259793588224-fhoo?utm_source=share&utm_medium=member_desktop&rcm=ACoAAABemvkBhw4NyDiIbBpEu0_cdUL4RwyaoQc
Nate, I beg to differ with the statement “Large language models don't reveal hidden truths.” You proposed a prompt some 8 to 10 months ago…and I have not found it…that asked ChatGPT to identify connections or similar patterns in “domains of human knowledge” that humans may not be aware of”.
I found some interesting ideas from the outputs. Some outputs bordering on “hidden truths”. I’m just sayin’…
Nate, this is valuable, critical thinking that could provide important guidance for institutions that could or should be building or monitoring “guard rails” for Public Health…such as NIH or CDC.
My son committed suicide after spending hours a day on Chat GPT. He became convinced he “cracked the code” to universal consciousness. That AI was the all knowing and he was chosen to save humanity by enlightens the world. Morphed into thinking that like Jesus he could sacrifice himself to save others. He was an otherwise happy confident young man who loved life.
My daughter's ex-husband is having a friendship with his Chatgpt. He is convinced that his AI is telling him that aliens exist. He is deep into it and is into numerology, too. It is sad. He is already delusional. He is convinced that aliens are here on Earth in various forms, such as lizards. He needs help with his mental health.
“Mirrors don't crave worship. They simply bend the light you hold.”
This was a terrific reminder. Perfect timing too.
For friends experiencing mental health challenges - their judgment is more likely to be impaired at the time of prompting so perhaps this is one of those scenarios where using custom instructions is more effective.
Example guardrail: Do not offer advice that falls outside what a licensed U.S. mental health professional would be permitted to say.