Everyone has opinions about AI. Most of them are performed. You know the pattern: hot take on LinkedIn, a stance designed to generate engagement, positioned just provocatively enough to seem like thinking. Every so often a developer posts something genuinely uncertain instead, and it stands out precisely because almost nobody does it. The incentive structure rewards confidence, not honesty. So everyone sounds certain and nobody shows their work.
I build with AI every day. I also think most of what’s being built with AI right now is shallow, fast, and forgettable. Both things are true at the same time. That contradiction is uncomfortable, and I think the discomfort is worth sitting with instead of resolving into a clean narrative. So I built a page that forces me to sit with it. My AI Tenets — six beliefs about AI, each plotted on a spectrum, each connected to something I actually built because of it.
The spectrums are the point. Not “I believe X” — that’s a bumper sticker. “I believe X, and here’s exactly where on the continuum, and here’s how that position shifted since I started building.” The first version of tenet four — “AI scales whatever you feed it” — sat at 35 on a scale where 0 is “speed replaces clarity” and 100 is “speed forces clarity.” I was pessimistic. Speed felt like clarity’s enemy. Then I built the dismissal mechanic — the system that rejects gibberish questions and redirects wrong-fit ones. Watched users actually engage with it. The position moved to 70. Speed can force clarity, but only if the system is designed to demand it. I didn’t plan that shift. The work taught me something I didn’t expect.
Every tenet has a history dot. A hollow circle on the spectrum line showing where the position started, before the building and the conversations refined it. Hover over it and you’ll see when and why it moved. That’s the whole philosophy made visible: your beliefs should have version control. If you’ve been thinking about AI for two years and none of your positions have changed, you haven’t been thinking. You’ve been performing.
The positions that matter aren’t the ones you hold. They’re the ones you’re willing to update.
The hardest one was tenet three: “AI is training us to stop thinking.” I started at 68 — leaning toward “AI replaces thinking.” That felt righteous and clean. Then I built the interjection window, the feature that forces users to participate mid-debate instead of just watching. And I realized the framing was wrong. AI does both. It accelerates thinking AND replaces it, depending entirely on how the system is designed. The same tool in different hands. I moved to 50. Dead center. That’s the honest answer, and honest answers don’t perform well on social media, which is exactly why they belong on a page like this.
There’s a workflow baked into how I use these tenets now. After building a significant feature, I run it through each tenet’s test question. Did building this change how I think about any of them? If yes, the position moves, a history dot appears, and the reasoning gets updated. The page isn’t a manifesto. It’s a living document that exposes how my thinking evolves in response to what I actually build. The transparency is the credibility.
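The record-keeping behind that workflow is simple enough to sketch. Here's a minimal, hypothetical shape for a tenet and its history dots — the names (`Tenet`, `position`, `updateTenet`, and so on) are my assumptions for illustration, not the page's actual code:

```typescript
// Hypothetical data model for a tenet, its spectrum, and its history dots.
// Field names and structure are assumptions, not the real implementation.

interface HistoryEntry {
  position: number; // where on the 0–100 spectrum the belief sat before the move
  movedOn: string;  // date the position shifted
  reason: string;   // what was built or learned that forced the update
}

interface Tenet {
  id: number;
  belief: string;             // e.g. "AI scales whatever you feed it"
  spectrum: [string, string]; // labels for the 0 end and the 100 end
  position: number;           // current position, 0–100
  history: HistoryEntry[];    // one hollow dot per past position, oldest first
}

// After a significant feature ships, re-run the tenet's test question.
// If the answer moves the position, the old position becomes a history dot.
function updateTenet(tenet: Tenet, newPosition: number, reason: string): Tenet {
  if (newPosition === tenet.position) return tenet; // belief unchanged
  return {
    ...tenet,
    position: newPosition,
    history: [
      ...tenet.history,
      { position: tenet.position, movedOn: new Date().toISOString().slice(0, 10), reason },
    ],
  };
}
```

Under this sketch, tenet four's shift would be `updateTenet(tenetFour, 70, "built the dismissal mechanic; speed can force clarity")` — the 35 becomes a hollow dot, and the reasoning rides along with it.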
Most AI discourse is people arguing about the future from fixed positions. I wanted to do something different. State the positions. Show the spectrum. Show where they started. Show when they moved. Connect every belief to a real product decision that either confirmed it or forced me to update. The page does to me what I’m asking AI to do in the Symposium — show the reasoning, show where I stand, show when I’ve changed my mind.
I don’t know if these six tenets are right. I’m fairly sure at least two of them will move again in the next few months as I keep building. That’s the point. The positions that matter aren’t the ones you hold. They’re the ones you’re willing to update.
Ian Alexander
VP of Design — writing on leadership, AI product strategy, and building teams that ship.