The AI Builder's Guide to the Rest of the World
Or how AI broke everything you know about how tech works
Written in collaboration with Sairam Sundaresan—find more at newsletter.artofsaience.com, where he shares deeply researched essays and resources to help tech professionals and leaders master AI, drive results, and accelerate their careers.
My friend Sarah’s VP struts into the office, bursting with confidence. He had a good conversation with ChatGPT last night and arrived at an important insight: "We just need to increase the temperature parameter to make our model more creative!" Sarah's face shows the cautious, diplomatic expression all builders learn when their job description expands to include "professional dream crusher." She's calculating how to explain to a non-expert that AI creativity isn't a dial you turn up; it’s more like asking your accountant to be "more artistic," then being surprised when they file your taxes in interpretive dance.
Meetings like Sarah's are happening every day, everywhere. They’re all symptoms of the AI Literacy Crisis: the gap between the people who understand how AI really works and people who don’t—between AI experts and AI non-experts.
In the past, the gap between the tech literate and the tech illiterate wasn’t a serious problem: the nerds figured it out, wrote the manuals, and eventually your grandmother streamed Netflix. Tech knowledge trickled down reliably from experts to non-experts. But AI breaks the normal rules of tech knowledge trickle-down. In fact, AI breaks more or less all the rules of traditional tech.
Traditional patterns of tech thinking—cause-and-effect, if-then, input-output—all fail with AI. And traditional tech systems—predictable, controllable, and fixable—don’t work the way AI systems do. The more you know about AI, the more you realize that AI literacy isn't about understanding transformers or gradient descent; it's about unlearning everything you thought you knew about how technology works.
Consequently, the deeper your expertise grows, the further you drift from the intuitions of non-experts, and the harder it becomes to explain to them how AI systems work. As a result, the gap between AI experts and non-experts keeps widening rather than narrowing. That ever-widening gap has the proportions of a real crisis, not a mere problem.
Right now, as AI improves, everyone expects it to evolve the way past technologies did: into controllable, predictable systems—deterministic magic. That expectation is the AI Literacy Crisis hiding in plain sight.
Traditional software breaks in predictable ways. Fixing it is like fixing a car: you open the hood, find the broken part, and replace it. Line 47 has a bug? You fix line 47. Your loop is infinite? You add a break condition.
But AI systems don’t break like cars; they instead break like weather patterns. The model that aced your evaluation yesterday decides today is perfect for writing all responses as limericks. Your vision system that recognizes cats with 99.7% accuracy suddenly becomes convinced that your coffee mug is a small, ceramic dog with an existential crisis.
Here’s a cruel irony of AI expertise: the more you know, the less certain you become. You start speaking in probabilities. You get fluent in "Usually," "Mostly," and "It depends." Meanwhile, your non-expert stakeholders insist on certainties. They demand "Always," "Never," and "Exactly." Most AI experts were never trained to communicate their work to non-experts, and AI only magnifies that gap. As a result, they routinely struggle to explain to decision-makers why ChatGPT-inspired ideas aren’t feasible.
WARNING: The following paragraphs contain a defense of people who drive you crazy in meetings. Reader discretion is advised.
Tech users live in a binary world: things work or they don't. Your smart home app either turns on the lights or it doesn't. There's no "well, it turned on the lights with 94% confidence, but it wasn't sure about the kitchen."
Tech builders, by contrast, live in the land of statistics and gradients. We get excited about pushing accuracy from 94% to 96%. We throw launch parties for a two-point improvement as if we just cured cancer. But users don’t see 96% accuracy as something worth celebrating. They instead see a 4% failure rate as complete unreliability, because the failures happened at exactly the moment they were trying to impress their mother-in-law.
Here's an uncomfortable truth: both sides are completely right. Users are right to expect reliability, and builders are right that 96% accuracy is genuinely impressive. This isn't a communication problem; it's an expectation problem: a fundamental incompatibility between how humans expect technology to work and how AI actually works.
Think about it: every other technology we use has trained us to expect predictable behavior. When you type “3+4” into your calculator, it doesn't give you "probably 7." Yet we're suddenly asking people to be cool with systems that work differently 4% of the time—and for reasons nobody can fully explain.
Let's come back to Sarah's VP. He’s not stupid. He’s used ChatGPT. He’s seen it write poetry, solve math problems, and engage in sophisticated reasoning. So when he suggests Sarah make her model "more creative" by tweaking a parameter, he’s operating from actual experiences that give him a sense of what AI can do. But those experiences don’t tell him what AI can’t do.
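For the technically curious, here is roughly what that "creativity" dial controls under the hood. In most language models, temperature simply rescales the scores of candidate next tokens before one is sampled: lower values make the model pick the safest option almost every time, while higher values make its picks more random. The snippet below is a toy sketch with made-up numbers, not any particular vendor's API.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Pick one candidate-token index from raw model scores (logits).

    Temperature rescales the scores before they become probabilities:
    low values sharpen the distribution (safe, repetitive picks),
    high values flatten it (more surprising, more error-prone picks).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy scores for four imaginary candidate tokens (illustration only).
toy_logits = [2.0, 1.0, 0.5, 0.1]
print(sample_with_temperature(toy_logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(toy_logits, temperature=2.0))  # far more scattered
```

Nothing in that dial adds judgment or taste. It only trades predictability for randomness, which is why "turn up the creativity" so often turns into limericks.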
In 2025, everyone has just enough experience with AI to be dangerous. They know what's possible—they've seen the magic. What they don't understand is the gap between "works in ChatGPT" and "works reliably in production with our data, our constraints, and our liability insurance." It's like using an iPhone and then asking why your team can't make your prototype "more iPhone-like" by adjusting some settings.
The VP has interacted with systems representing billions of dollars of infrastructure and years of specialized fine-tuning, but he’s still not an expert. His mental model is still just "AI = smart computer that does what you ask."
Previous technologies had clear expertise boundaries. You either knew SQL or you didn't. You either understood networking or you didn't. But what’s particularly insidious about AI is that it gives users a simulation of knowing and understanding. Everyone can chat with ChatGPT, so they think they understand it. In reality, that’s like thinking you understand filmmaking because you've watched a lot of movies.
Let's be honest: we builders are partially responsible for this mess. When our model does something impressive in the development environment, we get excited and showcase capabilities without mentioning the caveats. We're like proud parents: "Look! Little GPT-4.5 can solve differential equations!" We forget to mention that it sometimes thinks “2+2 = fish” and will confidently explain why fish is mathematically correct.
Sarah's meeting is still happening. She's found her metaphor: “Making AI more creative is like asking your careful accountant to be more artistic. You might get more interesting results, but also more errors; some of those might be expensive. Plus, ‘creativity’ might mean the system writes all your financial reports as limericks—technically more creative, definitely not what the SEC wants.”
The VP nods as comprehension starts setting in. He leaves the meeting with a slightly different mental model. Instead of seeing AI as a controllable system with adjustable dials, he now sees it as a powerful, sometimes mysterious partner that requires different expectations and skills to work with effectively. More importantly, he’s likely to start asking different questions—not, "Can we make it do this thing?" but, "What would we gain, and what would we lose?" Not, "Why did it do that?" but, "How often does it do that, and can we live with it?"
That shift is AI literacy in action. It's not about understanding the technical details. It's about understanding that AI involves tradeoffs, that every improvement has a cost, and that every system has failure modes you can't eliminate, only manage.
Here is a practical strategy that you can use to start spreading AI literacy today. I call this the UMM framework: show Uncertainty, Make tradeoffs visible, use Metaphors humans can understand.
Stop hiding the uncertainty. Instead of "the system is 96% accurate," try "the system handles most cases well but gets cautious when unsure." Instead of "the model had unexpected behavior," try "the AI took an unanticipated creative approach."
Make the tradeoffs visible. "This update reduces catastrophic failures by 60%, but the system will be more cautious in edge cases." "We can make it more creative, but creative systems are more unpredictable." "It handles dim lighting better now, but it's slightly slower in optimal conditions."
Use human metaphors. "The AI was trained to recognize faces in good lighting. Asking it to work in dim restaurants is like asking someone who learned to drive in sunny California to navigate a Boston snowstorm. Technically possible, but expect some uncertainty."
The UMM framework is just a start. You can't close the AI literacy gap with UMM alone. The only real solution is to fundamentally change how everyone thinks about technology. Builders must pledge themselves to that cause—to a quiet conspiracy to teach the world that uncertainty isn't a bug in intelligent systems; it's a feature—the very feature that makes them intelligent. And we'll do it gradually. One confused stakeholder at a time.