Dead Ends, Agent Loops, and the Case for Human-Centered Failures

Why product teams should design for failure, not just the happy path—lessons from agent loops and a frustrating Uber Eats experience.

[Illustration: a frustrated user and a confused AI agent trapped in an endless loop, with no failure handling and no exit]

One of the most interesting books I've ever read about user experience is Don Norman's The Design of Everyday Things. There's an idea in it that really stuck with me: assume that systems will fail. Not "if," but when. We should plan for those failure paths and design our systems around them in a way that's human-centered.

That idea feels even more relevant today, in the age of agentic AI. Because let's be honest: how many times do we see users getting stuck in agent loops that no one thought to cover? No handler. No exit. Just a dead end that leaves the user blocked and confused.


A Support That Needed Support

In this agentic boom, we're all seeing the same pattern: systems are great at handling the "happy paths" but fall apart the moment something unexpected happens. A user asks something slightly off-script, the agent doesn't know what to do, and suddenly they're stuck in an endless loop. No handler. No way out. Just confusion.

That's the thing—when we design only for the best-case scenario, we forget that people don't always behave like agent graphs assume. And when the system fails, it's the human who pays the price in frustration.


The Uber Eats Loop

A couple of weeks ago, I ordered something on Uber Eats. After paying all the delivery fees, service fees, and every other fee they could squeeze in, the order got lost. So, naturally, I turned to support.

And here’s the problem: the support was fully agent-driven. It couldn't understand my problem, and I ended up in an infinite loop—just going in circles with no resolution. In a human-centered product, the assumption would be: if something can go wrong, it will go wrong. And when it does, the design should give me an exit.

After the first failed attempt, all it would've taken was a simple thumbs up/thumbs down: "Was this helpful?" If I hit no, then just give me a button to talk to a human. Yes, that comes with an operational cost. But what's the point of having "support" if it doesn't actually support you?


If It Can Go Wrong, It Will Go Wrong

This is where product folks need to pause and ask themselves a simple question: what happens when this fails? Not "if" — when.

There are three things every system should be ready for:

  1. Catch the failure quickly. Don't let users spin in loops forever.
  2. Communicate clearly. Let people know what's wrong instead of hiding it in vague error messages.
  3. Have a human in the loop. Give users a real way out, even if it's rare or costly.
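The three steps above can be sketched in a few lines. This is a minimal illustration, not a real support system: the names (`handle_request`, `ask_agent`, `escalate_to_human`) and the reply shape are assumptions made up for this example.

```python
# Hypothetical sketch of the three failure-handling steps.
# All names and the reply format are illustrative assumptions.

MAX_AGENT_ATTEMPTS = 2  # 1. Catch the failure quickly: bound the loop.

def handle_request(message, ask_agent, escalate_to_human):
    """Try the agent a bounded number of times; hand off to a human after that."""
    for attempt in range(1, MAX_AGENT_ATTEMPTS + 1):
        reply = ask_agent(message)
        if reply["resolved"]:
            return reply["text"]
        # 2. Communicate clearly: acknowledge the failed attempt
        #    instead of silently looping on the same input.
        message = f"(attempt {attempt} unresolved) {message}"
    # 3. Human in the loop: a real exit, even if it's rare or costly.
    return escalate_to_human(message)


# Example: an agent that never resolves an off-script request.
def stuck_agent(msg):
    return {"resolved": False, "text": "Sorry, I didn't understand."}

def human_handoff(msg):
    return "Connecting you to a human agent..."

print(handle_request("My order got lost", stuck_agent, human_handoff))
# -> Connecting you to a human agent...
```

The point isn't the code, it's the cap: after a fixed number of failed attempts, the system stops looping and gives the user a way out.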

Because even if these failures only hit 1% of cases, that 1% matters. Respecting users means planning for them too—not just the happy paths your agent flows assume.


Design Human-Centered Agents

The real test of design isn't how smooth the happy path is—it's how you treat people when the path breaks.

So, product folks: don't just design for when everything works. Assume failure. Plan for it. Deliver with humans in mind. Because the moment users hit a dead end, what they'll remember isn't the 99 times your system worked—it's that one time it didn't, and how you handled it.