Nate’s Substack

Dreaming Lies: Why AI Hallucinates, and How to (Mostly) Stop It

This is your complete guidebook to hallucinations: why they happen, how they differ across models and tools, and 10 specific tips to dramatically reduce them and get useful work done!

Nate
Mar 20, 2025

I recently started an ideas thread for paid Substack subscribers, and this post was one of the top requests! I’d actually been noodling on a hallucination piece for a while, so I pulled my notes together into this longer read. I wanted to cover the research around hallucinations in large language models and to ground it in a human perspective: what do hallucinations look like for me? How do I responsibly compare hallucinations with how humans process error? How do I calibrate my expectation that an LLM is accurate, recognizing that my own ask is sometimes really fuzzy? And, given all this, what are some practical best practices for avoiding hallucinations?

This post has it all, so read on (or, let’s be honest, let ChatGPT summarize it for you) and check out some of the best thinking I’ve been able to put together on the vexed question of hallucinations in large language models…

Keep reading with a 7-day free trial

Subscribe to Nate’s Substack to keep reading this post and get 7 days of free access to the full post archives.
