uxtopian
DESIGN LEADERSHIP + AI

What AI Reveals About Loyalty, Risk, and Modern Work

The Mismatch We Keep Normalizing

This piece was prompted by a recent comment from Ryan Roslansky, CEO of LinkedIn, who suggested that five-year career plans no longer make sense in an AI-accelerated world. I agree with the premise, but it felt incomplete. Paired with Cornel West’s long-standing thinking on work, dignity, and moral responsibility, it raised a deeper question for me: if systems now operate on short horizons by design, why do we still ask individuals to organize their lives around long-term loyalty?

Over the last decade, the way organizations operate has quietly changed; most of us felt it long before we had language for it. Planning cycles shortened. Decision loops tightened. Expectations began resetting faster than most humans can realistically recalibrate. This isn’t a critique of intent or values. It’s a description of how modern systems behave under pressure.

AI didn’t cause this shift. It exposed the underlying incentive structure by dramatically lowering the cost of change, iteration, and reversal across nearly every layer of work. What once unfolded over years now happens in weeks. Direction shifts without ceremony—unless a ceremony is a Slack thread. Roles evolve midstream. Systems are rewarded for flexibility. Humans are still asked for loyalty.

Most of us feel the tension. The question is why we keep pretending it isn’t there.

How Modern Organizations Actually Operate

Modern organizations operate on short time horizons. This isn’t a failure of values; it’s how they’re designed.

Venture-backed companies operate within finite fund lifecycles, typically ten years, with meaningful pressure for exits within three to five. Public companies live quarter to quarter, shaped by earnings cycles, analyst expectations, and executive compensation tied closely to near-term stock performance. Success is measured within narrow windows. That pressure is real, but it doesn’t land evenly.

Employees are asked to align with compressed KPIs, deliver results this quarter, and demonstrate long-term belief in missions they don’t control. Deferred rewards like equity, progression, and stability are positioned as justification for patience and endurance. In practice, we often shake hands knowing neither side expects permanence, yet perform the ritual as if it were a forever marriage.

Organizations may be constrained by markets and capital, but they retain the option to revise direction, reset goals, and reallocate risk. Individuals are asked to commit without access to those same levers. These timelines don’t align. That doesn’t make companies dishonest or employees disloyal. It makes the system incoherent.

Organizations are rewarded for optionality and rapid adjustment. Individuals are asked for continuity and long-term investment of time. When conditions change, the system absorbs flexibility. Individuals absorb disruption.

AI didn’t create this mismatch; it just painted it fluorescent orange and made it impossible to ignore.

Deferred Promises in Short-Horizon Systems

During hiring and onboarding, organizations often signal long-term investment through benefits and incentives that assume multi-year tenure: equity vesting schedules, career ladders, bonus plans, and extended benefits packages. These signals are meant to communicate stability and mutual commitment.

Many of these mechanisms only deliver meaningful value if an employee stays long enough, while the system itself is optimized for short horizons. Stock options typically vest over several years and often include short post-employment exercise windows. In practice, only a minority of employees ever realize the value implied during hiring. (An industry peer of mine holds equity from eleven companies, earned through full-time roles and consulting work; none of it has produced a profit.)

Some companies now offer early exercise to accelerate ownership, but that shift often transfers risk to employees through upfront cash outlay. Other benefits, like life insurance, exist only while employed. Bonus plans are frequently ambiguous, with outcomes that depend on factors well beyond an individual’s control.

Taken together, these aren’t bad-faith tactics. They’re artifacts of a system signaling long-term commitment while operating on short-term constraints. The result is a growing gap between what’s implied during hiring and what’s structurally likely to occur.

That gap doesn’t create trust or distrust on its own. It creates uncertainty borne disproportionately by individuals.

Ambiguity Isn’t a Skill. It Is a Cost Allocation Strategy.

Leadership often celebrates the ability to “lead in ambiguity” as a sign of maturity. I don’t disagree. Ambiguity is real. Markets shift. Information is incomplete. Leaders rarely have full clarity. The problem isn’t ambiguity itself. It’s how unevenly it’s distributed.

That asymmetry shows up most clearly during strategic reversals. Entire functions can be deprioritized not because the work failed, but because direction changed. The retrenchment from long-horizon investments in virtual and augmented reality at Meta is a clear example. Thousands of people aligned their careers to a long-term vision and delivered against it. When leadership reassessed priorities, those roles became non-core overnight.

The uncertainty was real for everyone. The cost was not.

The organization absorbed financial risk over time. Individuals absorbed immediate, personal consequences.

In short-horizon systems, clarity is expensive. It requires trade-offs, prioritization, and visible commitment. Ambiguity preserves optionality. Over time, it becomes structurally useful. But it’s worth naming what that means in practice: ambiguity reduces exposure for decision-makers while increasing exposure for those executing decisions.

This isn’t just philosophical. Decades of research show that role ambiguity correlates with:

  • Higher stress and burnout
  • Lower job satisfaction
  • Reduced performance
  • Increased intent to leave

Clarity isn’t a comfort preference. It’s an operational requirement.

What’s missing in most organizations is not certainty, but minimum viable clarity: a shared understanding of what is known, what remains unresolved, where judgment is expected, and where alignment is non-negotiable. Without that baseline, ambiguity stops being a condition to work within and becomes a mechanism for shifting risk. People aren’t asking for guarantees. They’re asking for enough clarity to understand the bet they’re being asked to carry. 

Two narratives tend to follow. In one, leadership understands the direction, and employees are expected to execute despite uncertainty. In the other, leadership acknowledges the direction is still forming and reframes that absence as the reason people were hired to “figure it out.”

AI has made this dynamic louder. Teams are told to adapt, unlearn, and relearn as workflows shift toward AI-first models. Speed and tool fluency become proxies for relevance. The implicit expectation is that individuals will derive clarity through action while definitions of value keep moving. What gets lost is the difference between adaptation and accountability. Accelerating expectations without clear constraints transfers risk downward. The system stays flexible. The individual carries the uncertainty.

When people ask for clearer boundaries or decision context, that request is often misread as resistance—when it’s actually an attempt to do the work responsibly. This isn’t an argument against ambiguity. It’s an argument about who pays for it.

Bets Without Symmetric Ownership

Product organizations often describe work as a series of bets. That language is meant to be honest. A bet acknowledges uncertainty and the possibility of failure. But in a real bet, three things are clear: who places it, what’s at stake, and who absorbs the loss.

Inside organizations, those lines blur. Leadership defines the bet. Teams execute it. Individuals often carry the consequences. When a bet succeeds, the outcome is collective. When it fails, the impact becomes personal: stalled progression, reputational damage, burnout, and exit. Even when leaders face consequences, they’re usually delayed or cushioned by role mobility.

Without explicit ownership of downside, “bets” become another way ambiguity is normalized while risk is pushed downward. Learning is celebrated rhetorically. Accountability is rarely symmetrical. Ambiguity requires design. Without minimum viable clarity (what’s known, what’s still open, where judgment is expected versus where alignment is required), ambiguity doesn’t feel like empowerment. It feels like exposure.

Employees don’t place individual bets. Bets are framed as collective and leadership-vetted. Yet the downside rarely stays collective. Even when misalignment is flagged early, the consequences of failure still land personally.

Efficiency Without Time: A Repeating Pattern

None of this is new. Across transportation, industry, and information work, efficiency has increased without reliably returning time or stability to individuals. What changes isn’t how much work exists, but how quickly expectations reset.

Historically, efficiency gains were treated as capacity to redeploy, not surplus to redistribute. Faster transportation expanded distances. Household technology raised standards. Digital tools compressed execution while multiplying coordination and availability. The internet promised to save us time, yet now absorbs roughly 40% of our waking hours. Work changed shape. It rarely shrank. Reductions in working hours came through policy and negotiation, not productivity alone. Absent constraints, efficiency was absorbed by the system.

In growth-oriented systems, efficiency doesn’t feel like relief. It feels like acceleration. Gains accrue structurally. Human costs remain externalized.

AI enters this pattern not as a rupture, but as an accelerant. Framed this way, AI is also a bet. Organizations are making directional bets on new workflows, roles, and definitions of value before those shifts are fully understood. The upside is discussed openly. The downside less so.

AI as an Accelerant, Not a Rupture

Studies suggest generative AI can raise task-level productivity by 15–20% for certain knowledge tasks. However, overall time savings remain modest without structural change.

In short-horizon systems, these gains don’t create slack. They reset expectations. Velocity becomes a proxy for value. The ability to change direction quickly benefits those who control direction. Human judgment, synthesis, and sense-making don’t scale at the same rate. The bottleneck moves. It doesn’t disappear.

We’ve seen this pattern before. Many leaders pressed for Apple-level polish without changing the organization to support it. Now it’s AI. Organizations look at a handful of “Midas touch” AI cases (companies that made clear bets and fully committed) and assume the same outcome is possible without the same clarity or commitment.

When these expectations don’t pay off the way they did for committed organizations, the responsibility lands on a role, not the organization: missed bonuses, slowed promotions, or, at the extreme, layoffs and being managed out.

The Rational Employee Response

We can see the impact in tenure. Median employee tenure in the U.S. sits under four years. Senior roles turn over even faster.

Forever roles and five-year vesting schedules increasingly collide with real tenure data:

  • VP of Design: ~2.1 years
  • VP of Product: ~2.3 years
  • CRO: ~1.8 years
  • CMO: ~1.8 years

Layoffs, reorganizations, and leadership churn reset tenure regardless of performance. In that environment, long-term attachment no longer reduces risk. Employees respond by bounding commitment, prioritizing learning, and seeking clarity early. On day one of a new job, the implicit clock starts: what can I learn, who can I build relationships with, and what options does this role create next?

That isn’t disengagement. It’s alignment with reality.

Toward More Explicit Contracts

If loyalty is now conditional and time-bound, work needs to be designed accordingly. This isn’t a call for less commitment. It’s a call for more honest contracts between systems and the people inside them.

At senior levels, the work often isn’t to rush toward solutions, but to name the system as it exists: here’s what we’re optimizing for, why it keeps producing this outcome, what remains unclear, and what decisions would need to be made before any solution would be credible. That includes making bets more explicit and more equitable, clarifying who is placing them, what success and failure actually mean, and who absorbs the downside when direction changes. Without that clarity, risk defaults downward, regardless of intent.

Organizations should make time horizons explicit, distribute ambiguity intentionally, and stop using loyalty language to mask optionality. Individuals, in turn, should treat roles as bounded engagements, optimize for learning over promises, and seek minimum viable clarity early. None of this removes uncertainty. It makes the terms under which uncertainty is carried visible. Until the language of commitment reflects the structure of work, loyalty will continue to erode, not as protest, but as adaptation.

Designing for Ambiguity

For me, this is where product and design leadership matter most. Not by eliminating ambiguity, but by designing for it: making time horizons explicit, clarifying ownership of bets, and establishing minimum viable clarity so teams understand the risk they’re carrying. The work isn’t to promise certainty, but to prevent avoidable exposure. When systems are honest about how they operate, people can engage fully without mistaking loyalty for blind faith.

In an AI-accelerated system where change is cheap and reversals are easy, clarity, not certainty, becomes the shared responsibility that allows both organizations and individuals to take real bets without mistaking ambiguity for trust.