
Answers Aren’t Understanding

Greg Satell wrote something recently that stopped me. The business stories we tell each other, Blockbuster ignoring Netflix, Kodak ignoring digital photography, aren’t really true. They’re simplified and satisfying and wrong. We accept them because the feeling of understanding shows up before the actual understanding does. Your brain decides something makes sense before it checks whether it does.

AI does the same thing at industrial scale. Ask a question, get a confident, well-structured answer in three seconds. It has paragraphs. It has reasoning. It even has citations. It feels like understanding. But try to build on it. Ask the follow-up question it didn’t anticipate. That’s where the depth isn’t.

There’s a difference between knowing something and understanding it. Knowing is retrieval. The right fact at the right time. Understanding is what survives a real conversation. It can handle the follow-up. It knows what it’s leaving out. It can tell you where it might be wrong. Wikipedia knowledge is useful for the same things game shows are useful for: pattern matching under time pressure. Nobody ever solved an organizational problem by being fast on the buzzer.

The thinkers I keep returning to didn’t have faster answers. They had slower questions. Byung-Chul Han sat with the concept of burnout long enough to see that it wasn’t a personal failing but a structural one. Taleb didn’t just notice that predictions fail. He built an entire epistemology around what we don’t know. That kind of thinking doesn’t come from reading the summary. It comes from sitting with an idea long enough for the idea to argue back.

The trap isn’t speed. Speed is fine. The trap is that AI makes shallowness feel productive. You can read the summary of a framework in thirty seconds, reference it in a meeting, drop it into a strategy doc. You sound like you understand it. But the moment someone asks “why that framework and not this one?” the depth isn’t there. You retrieved the conclusion without doing the thinking that produced it.

This is partly why I built the Symposium the way I did. Not to get faster answers. There are plenty of those. I wanted a space where ideas have to survive contact with other ideas. When Taleb’s antifragility collides with Han’s burnout society, neither framework gets to be the clean answer. They have to negotiate. The friction is the point. The Symposium doesn’t tell you what to think. It makes you sit with the tension long enough that something deeper emerges. A toy that creates the conditions for understanding by refusing to let you stop at the first good answer.

Same thing I’ve seen in every design org I’ve been in. The leaders who actually changed things weren’t the ones with the most frameworks memorized. They were the ones who’d done the thinking. Who could tell you not just what the research showed but what it couldn’t show. Not just what the framework says but where it breaks down. That depth isn’t a luxury. It’s the difference between someone who presents well and someone who can actually navigate when the plan falls apart.

AI gave us infinite answers. That was supposed to be the hard part. Turns out the hard part was always the thinking that the answers were meant to serve. What happens when a generation with access to every answer loses the habit of sitting with a question long enough to understand it?

Ian Alexander

VP of Design — writing on leadership, AI product strategy, and building teams that ship.