We're more patient with AI than with one another
I’ve been thinking about how quickly we’ve adapted to working with AI. We all understand the deal. If the output is bad, it’s probably on us. The prompt was vague. The context was missing. We didn’t give it enough constraints.
So we revise. We clarify. We try again.
No frustration. No judgment. Just iteration.
What’s strange is how little of that generosity we extend to each other. Somewhere along the way, we learned to treat machines as systems that need better inputs—but we still treat humans as if they should just know. And when they don’t, we judge competence, take it personally, make assumptions, or shut down.
That gap keeps showing up for me, especially as a product and design leader working at the intersection of people, systems, and AI. It’s also what pulled me back to The Four Agreements by Don Miguel Ruiz, not as a spiritual guide, but as a surprisingly practical framework for modern work.
1 - Be impeccable with your word
AI takes us literally. We’ve all learned that the hard way. If we’re unclear, the result reflects it. With humans, though, we’re often loose with language. We imply instead of saying. We soften when we should be direct. We use cleverness when clarity would do more good. Other times, we’re blunt when a little compassion would go further.
Then we act surprised when things go sideways. Projects stall. Culture scores dip. Attrition rises. Burnout spreads.
Being impeccable with your word doesn’t mean being rigid or verbose. It means saying what you mean and owning your intent. Words shape outcomes. In leadership, especially product design leadership, language is one of the primary tools we have. It can create alignment or quietly destroy it. (Clarity = kindness.) Unclear inputs produce fragile results. That applies to people, too.
2 - Don’t take anything personally
When AI pushes back or gives us an answer we don’t like, we don’t get offended. We treat it as information; we pause, go “hmm,” and adjust. When a human does the same thing, it suddenly feels different. Feedback feels loaded. Questions feel like challenges. Disagreement feels like judgment. Most of the time, it’s none of those things. Ruiz’s point here is simple and uncomfortable: what other people say and do is usually about their context, pressures, incentives, and worldview. It’s rarely about you.
In product organizations, this shows up constantly. Engineering resistance. Executive skepticism. User frustration. These aren’t personal attacks. They’re signals. Often, they’re downstream effects of unclear strategy, misaligned incentives, or someone simply having a bad day. AI has made us better at separating signal from ego.
Hot tip: we’d be better leaders if we did the same with people.
3 - Don’t make assumptions
AI punishes assumptions immediately. You think it understands what you meant. It doesn’t. So you ask better questions. You add clarity. You iterate. With humans, we skip that step.
We assume intent. We assume competence, or lack of it. We assume alignment that was never explicitly established. Assumptions feel efficient. They move things forward—until they don’t. And when they fail, they’re expensive. They create friction, erode trust, and slow teams down in ways that are painfully obvious in hindsight (see: features that don’t clearly solve user or business needs) and hard to spot in the moment (see: almost every Zoom call).
Strong leaders replace assumptions with curiosity. When in doubt, go Socratic. They ask before they conclude. They clarify before they judge. Not because they’re soft, but because outcomes make or break products, teams, and careers.
Working with AI has trained many of us to be better question-askers. We just haven’t fully applied that skill back to human systems yet.
4 - Always do your best
AI doesn’t judge effort. It just responds to what you give it. Treat it like a quick Google search, and you’ll get mediocre results. Connect it to an MCP server, give it real context and a strong prompt, and it shines.
Humans, on the other hand, tend to measure each other against a fixed, often invisible standard. We forget that “your best” isn’t constant. It changes with experience, context, pressure, health, life, and timing. Doing your best isn’t about grinding harder or pretending everything is fine.
It’s about showing up with honesty, care, and effort—and extending that same grace to others.
The uncomfortable takeaway
We’ve learned how to collaborate with machines faster than we’ve learned how to collaborate with each other.
AI has taught us to:
- be clearer
- be less reactive
- iterate instead of blame
- separate intent from outcome
The opportunity now is to bring those behaviors back into our human relationships, without losing empathy. If we can patiently refine prompts for a system with no feelings, we can afford to be a little more patient with the people building alongside us (and AI).
That might be the real work of leadership in this next era.
If your organization needs help balancing AI + Design/Product, give me a shout.

