Two years ago, Elsa Sjunneson's spouse asked an early version of Copilot to draw a picture of a mom with one eye, hearing aids, and glasses with her kids. A simple request. The AI produced something vaguely Picasso-esque, then apologized to her for being blind. It kept apologizing. Every time she mentioned her disability, the model treated it as something to be sorry about.
Sjunneson is deaf-blind. She has spent 16 years in disability advocacy, work that increasingly intersects with tech. And she's been running experiments on AI systems since before most people had heard of ChatGPT.
What she’s discovered reveals something every AI builder should understand: if your training data doesn’t include someone, your model can’t reason about them. And roughly 1.3 billion people worldwide—about 16% of the global population—live with disabilities. In the US alone, it’s 1 in 4 adults.
If you’re shipping AI products, buying AI systems, or advising on AI adoption, this piece is for you.
Here’s what’s inside:
The apology problem and what fixed it. Why AI models stopped treating disability as tragedy, and what that shift reveals about training decisions you’re making right now.
When human help is the bigger risk. The Be My Eyes case study and why AI as an independence tool reframes how you should think about assistance features.
Training data that erases whole populations. MIT’s Moral Machine didn’t include wheelchairs. That wasn’t an accident, and the pattern extends to your systems.
Model selection matters. Which models actually understand disability policy—and which ones will write you inaccessible code while confidently telling you it’s fine.
The audit framework. Four prompts matched to different decision points, from pre-commit checks to vendor evaluation.
Let’s get into it.