After I shipped The AI Symposium, I kept building. Not because anyone asked. The only real feedback was about mobile, which I fixed. I kept going because the Symposium opened up a question I wasn’t done with. Thos Niles put it well: right now the AI interaction pattern is basically a search box that texts you back. I wanted to push past that. What happens when you stop treating the model as a black box and let people into the reasoning?
Most AI products hide everything between the prompt and the response. That makes sense when the point is the answer. But watching Taleb’s antifragility collide with Bostrom’s existential risk, watching Benjamin’s structural critique run into Kurzweil’s exponential optimism, the interesting part was never what anyone concluded. It was how they got there. That part was invisible. So I started building controls to make it visible.
The first control was a comfort toggle. Each thinker starts in comfort mode: loose, exploratory, willing to entertain ideas outside their framework. Turn comfort off and the axioms activate. Hard lines emerge. Taleb stops being conversational and starts calling people frauds. Han stops engaging with hypotheticals and just diagnoses the entire conversation as performance culture. The naming was deliberate. Turning comfort off makes thinkers more themselves, not less. The debate gets harder, sharper, more interesting.
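Under the hood the toggle is less mysterious than it sounds: it's a flag that changes what goes into the persona prompt. Here's a stripped-down TypeScript sketch of the idea; the names and prompt wording are simplified for illustration, not the Symposium's production code.

```typescript
// Illustrative sketch only; names and wording are simplified,
// not the Symposium's actual implementation.
interface Thinker {
  name: string;
  axiom: string;        // the core claim the thinker reasons from
  hardLines: string[];  // positions they will not soften
  comfort: boolean;     // true = loose and exploratory, false = axioms on
}

// The toggle is just a branch in prompt construction. Comfort on keeps the
// persona exploratory; comfort off injects the axiom and the hard lines.
function buildPersonaPrompt(t: Thinker): string {
  const base = `You are ${t.name}. Argue in your own published voice.`;
  if (t.comfort) {
    return `${base} Stay exploratory. Entertain ideas outside your framework.`;
  }
  return [
    base,
    `Reason strictly from your core axiom: ${t.axiom}`,
    `Never cross these hard lines: ${t.hardLines.join("; ")}`,
  ].join("\n");
}
```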
Then I added Six Thinking Hats, de Bono’s method for constraining how someone reasons, not what they reason about. Assign Taleb the White Hat and he can only cite facts and data. No insults, no skin-in-the-game lectures, just evidence. Put the Red Hat on Bostrom and he has to speak from feeling instead of probabilistic reasoning. The constraint doesn’t silence anyone. It reveals something the natural voice hides. Taleb forced into pure data is fascinating. You see what he actually knows versus what he performs.
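Mechanically, a hat works the same way: another constraint layered onto the persona rather than a replacement for it. A rough sketch, with illustrative constraint wording:

```typescript
// De Bono's six hats as reasoning constraints. The wording here is
// illustrative, not the product's actual prompts.
type Hat = "white" | "red" | "black" | "yellow" | "green" | "blue";

const HAT_CONSTRAINTS: Record<Hat, string> = {
  white:  "Cite only facts and data. No opinions, no rhetoric, no insults.",
  red:    "Speak only from feeling and intuition. Do not justify with logic.",
  black:  "Name only the risks and the ways this fails.",
  yellow: "Name only the benefits and the ways this works.",
  green:  "Offer only new alternatives. Do not critique what exists.",
  blue:   "Comment only on the process of the debate itself.",
};

// A hat does not replace the persona; it constrains how it reasons.
function applyHat(personaPrompt: string, hat: Hat): string {
  return `${personaPrompt}\nConstraint (${hat} hat): ${HAT_CONSTRAINTS[hat]}`;
}
```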
I also started messing with the debate structure. The original design was a roundtable. Nine voices, equal weight. But some questions don’t want a town hall. They want a fight. So I added a head-to-head mode: two thinkers as primary combatants, the other seven commenting from the sidelines. Same question, different structure, completely different conversation. “Should AI companies partner with the military?” in roundtable gives you a survey. In head-to-head, with Kurzweil and Wiener in opposite corners, it turns into a real fight about progress versus control.
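Structurally, the two modes differ only in how speaking turns get planned. Something like this, heavily simplified:

```typescript
// Sketch of the two structures; the real orchestration is messier.
type DebateMode =
  | { kind: "roundtable" }                                // all voices, equal weight
  | { kind: "headToHead"; combatants: [string, string] }; // two primaries

interface Turn {
  speaker: string;
  role: "primary" | "commenter";
}

function planTurns(thinkers: string[], mode: DebateMode): Turn[] {
  if (mode.kind === "roundtable") {
    // Everyone speaks with equal weight.
    return thinkers.map((speaker): Turn => ({ speaker, role: "primary" }));
  }
  const [a, b] = mode.combatants;
  const commenters = thinkers.filter((t) => t !== a && t !== b);
  // Combatants open and close; the rest interject from the sidelines.
  return [
    { speaker: a, role: "primary" },
    { speaker: b, role: "primary" },
    ...commenters.map((speaker): Turn => ({ speaker, role: "commenter" })),
    { speaker: a, role: "primary" },
    { speaker: b, role: "primary" },
  ];
}

// planTurns(allNine, { kind: "headToHead", combatants: ["Kurzweil", "Wiener"] })
```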
Then there’s the admin panel. Click any thinker’s node and you see everything: their core axiom, the hard lines they won’t cross, links to their actual books and talks. Thos nailed why this matters. When AI gives you a confident answer, you’ve lost the provenance. You can’t sort signal from noise the way you can with search results, where years of practice let you instantly recognize what’s credible. The Symposium goes the other way. When Zuboff pushes back on data collection, you can read her axiom about surveillance capitalism and decide for yourself whether it applies to your question. The AI isn’t impersonating anyone. It’s modeling published frameworks, and it shows you the model.
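The panel itself is nothing clever. It renders the same record the persona prompts are built from, which is the whole point: what you read is literally what the model is constrained by. Roughly, with field names simplified:

```typescript
// The record a thinker's node exposes. Field names are illustrative.
interface ThinkerProfile {
  name: string;
  axiom: string;         // e.g. Zuboff's claim about surveillance capitalism
  hardLines: string[];   // positions the persona will not trade away
  sources: { title: string; url: string }[]; // the published books and talks
}

// Same record, rendered for the user instead of the model.
function renderProvenance(p: ThinkerProfile): string {
  return [
    p.name,
    `Axiom: ${p.axiom}`,
    `Hard lines: ${p.hardLines.join("; ")}`,
    `Sources: ${p.sources.map((s) => `${s.title} (${s.url})`).join("; ")}`,
  ].join("\n");
}
```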
AI products that hide their reasoning ask for trust. Products that expose it earn it.
Why do so few AI products let users see or change how the model reasons? I think most AI product design treats the model as a feature. Prompt in, output out, done. But the model’s reasoning is the product. Hiding it doesn’t simplify the experience. It removes the most valuable layer.
Same thing I’ve seen in every design org I’ve led. The best teams weren’t the best-staffed or best-tooled. They were the ones where you could see the reasoning. Why this decision and not that one. What was considered and rejected. What the constraints were. Make the reasoning visible and the output becomes trustworthy instead of just impressive. That’s true for teams. Turns out it’s true for AI too.
What I didn’t expect was how the controls changed the questions. When the Symposium was a black box, people asked broad, safe questions. Once they could configure the thinkers, the questions got sharper. Harder. Designed to exploit the specific frameworks they’d just studied. The controls didn’t just change the output. They changed the input. People who understand the machinery become better questioners. That might be the actual product. (For the record, the Symposium is just an experiment for me to push AI interaction patterns. But I’m learning a lot about what’s possible past the search-box-that-texts-you-back.)
I think this applies way beyond the Symposium. For any AI product where the reasoning matters, the highest-leverage design decision isn’t better output. It’s visible, configurable reasoning. Let people see why the system thinks what it thinks. Let them adjust the weights. Constrain the mode. The output improves because the user improves.
The Symposium started as an experiment in sustained disagreement. It turned into something else. Thos has this line I keep thinking about: AI has no taste and no judgment. It can reference a framework but it can’t feel it. That’s probably true. And it’s exactly why the reasoning needs to be visible. If the machine can’t bring judgment, the least we can do is give people the wiring to bring their own.
Ian Alexander
VP of Design — writing on leadership, AI product strategy, and building teams that ship.