Are We Losing the Ability to Explain Tech?
December 7, 2025 · 3 min read · TECHNOLOGY


A look at the side effects of AI over-reliance, focusing on how developers and architects risk losing the ability to explain complex concepts simply.

A wake-up call from one simple tweet

“One of the biggest side effects of over-relying on AI? We’re losing the ability to explain concepts to other humans.” — @uxderrick

That single line, posted on X in 2025, hit the tech timeline like quiet thunder. It’s short, unadorned, and uncomfortably accurate—especially for anyone who writes code, designs systems, or ships products for a living.

The New Normal in Engineering Workflows

Open any modern IDE in 2025 and you’ll see it: GitHub Copilot, Cursor, Claude Code, Gemini—AI is no longer a helper; it’s the co-author of most production code. Pull requests now contain entire features written in seconds. Architecture diagrams are generated with a prompt. Documentation? AI can spit out a 2,000-word README faster than most engineers can open Notion.

The output is impressive. The velocity metrics are through the roof. But something subtler is slipping away.

The Vanishing Skill of Human-to-Human Explanation

Think about the last time you had to:

- Whiteboard a distributed system for a skeptical principal engineer
- Justify a caching strategy to a staff+ reviewer
- Walk a product manager through why a certain database choice matters
- Onboard a junior dev by explaining the mental model behind your team’s codebase

These moments used to be daily rituals. Now they’re becoming rare.

When the AI already “knows” the answer and can generate a flawless explanation, the incentive to internalize, rephrase, and teach disappears. We copy-paste the AI’s answer, ship the ticket, and move on. The loop of deep understanding → articulation → feedback → deeper understanding is quietly breaking.

Real-World Symptoms I’ve Seen This Year

- Code reviews turning into “LGTM” fests because no one can explain the clever trick the AI used.
- System design interviews where candidates freeze when asked to explain their own AI-generated solution without the model’s help.
- Incident post-mortems that read like polished ChatGPT output but leave the on-call team unable to answer basic follow-up questions.
- Senior engineers struggling to mentor because they haven’t manually reasoned through a problem in months.

This Isn’t Anti-AI; It’s Pro-Craft

AI is the most powerful leverage engineers have ever been handed. The goal is not to reject it, but to refuse to let it atrophy the very skills that make us valuable in the first place.

Some practical ways to fight the erosion:

- Force yourself to re-explain every AI-generated solution in your own words before merging.
- Run “no-AI” pairing sessions or design reviews once a week.
- When reviewing PRs, ask the author to record a 90-second Loom walking through the change without reading the AI comment.
- Treat prompts as code: store, review, and iterate on them so the human reasoning stays in the driver’s seat.

Final Thought

Tools amplify skill; they don’t replace it. The moment we forget how to explain the magic is the moment the magic stops belonging to us.
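To make the “treat prompts as code” tip concrete, here is one minimal sketch of what that could look like. The file-free structure, the `id` and `version` fields, and the template text are all hypothetical illustrations, not the API of any particular tool:

```python
# A minimal sketch of "prompts as code": keep each prompt under version
# control as a reviewable template plus metadata, and render it
# explicitly so a human can still see (and explain) the reasoning.
# The id/version fields and the template wording are hypothetical.
from string import Template

PROMPT = {
    "id": "explain-change",  # hypothetical identifier, for review and search
    "version": 3,            # bumped on every reviewed edit, like code
    "template": Template(
        "Explain the change in $file to a reviewer unfamiliar with it.\n"
        "Constraints: no jargon, under $words words."
    ),
}

def render(prompt: dict, **params: str) -> str:
    """Fill in the template; raises KeyError if a parameter is missing."""
    return prompt["template"].substitute(**params)

print(render(PROMPT, file="cache.py", words="150"))
```

Stored this way, a prompt gets the same diffs, reviews, and history as any other source file, and the explicit parameters keep the human intent inspectable instead of buried in a chat log.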

@uxderrick didn’t just write a tweet. He diagnosed a slow-moving crisis in our industry. The good news? It’s entirely within our power to fix—just as long as we keep practicing the uniquely human art of turning complexity into clarity.

Let’s keep building with AI. But let’s never stop teaching like humans.
