Don't Lead With the Request. Explore First.

Opening a Claude session with your request looks faster. It isn't. The minutes you spend on exploration before instructing are the cheapest of the day.

The fastest way I've found to waste a Claude Code session is to start it the way I used to brief a junior engineer over Slack: with the request. Here's what I want, here's the rough shape, go. The output lands quickly. It almost always looks correct. And it's almost always wrong in a way I don't catch until the third file — when I notice the agent has been operating on an assumption I never made, one it inherited from my prompt and then built three follow-ups on top of.

This used to cost me hours a week. It doesn't anymore, and the change wasn't a better prompt template or a new MCP. It was a single habit: for anything bigger than a one-line edit, I now open the session with exploration, not instruction. I ask the agent to look at the problem space, list what it thinks is in play, and surface one or two specific things it would want to verify before changing anything. Then I read what comes back — quickly, sceptically. By the time I tell it what I actually want, we're calibrated. The request lands on a map we share, instead of one I assumed it already had.

What "Leading With the Request" Actually Costs

[Diagram: you send the request; "components? assumptions? evidence?" go unasked; the AI drifts. Caption: the request arrived before the questions did.]

The failure mode is subtle, which is what makes it expensive. Claude almost never refuses a vaguely specified task — it fills the gaps from its own model of how things probably work, and the gaps it fills are the ones you didn't notice you'd left. You get back code that compiles. You get back code that even passes the obvious tests. The drift only shows up when you try to extend it, or hand it back to the agent for a follow-up, and the assumption it baked in three turns ago is now load-bearing.

I've watched this happen with:

  • A migration script that "looked right" but assumed a column ordering I'd refactored two weeks earlier. I never mentioned the refactor; the model never asked.
  • A route handler the agent wrote against a URL shape it inferred from the file name. The real route shape lived in a config file two folders away. It got the verbs right and the path wrong (a sketch of this one follows the list).
  • A test suite that confidently mocked the wrong layer of a dependency tree, because the agent had inferred the architecture from a single import statement.
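
Here's that sketch. Every name below is invented, and I'm assuming an Express app purely for illustration:

```typescript
// Hypothetical reconstruction of the route-handler drift. All names invented.
import express from "express";

const app = express();

// What the agent wrote, inferring the path from the file name
// (src/routes/userExport.ts → /user-export):
app.post("/user-export", (_req, res) => {
  res.status(202).json({ queued: true });
});

// What the config file two folders away actually declared:
//   export const USER_EXPORT_PATH = "/api/v2/users/export";
//
// Right verb, wrong path. This compiles, responds, and passes any test
// that calls the handler directly instead of hitting the configured route.
```

Nothing in that file would fail a type check or a unit test. The disagreement only surfaces when a real client hits the path the config declares.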

None of these were hallucinations in the strict sense. They were extrapolations. The agent did exactly what I asked, against a picture of the codebase that disagreed with mine in three small places. That's not a model failure; that's a briefing failure. I gave the request before we'd agreed on what was true.

The Habit: Map Before Ask

[Diagram: components? assumptions? evidence? settled between you and the AI before the request. Caption: calibration is cheap; recalibration after the fact isn't.]

The replacement is barely a workflow. For anything non-trivial, the first message of the session is some version of:

Before I tell you what I want — read [these files / this module / this folder]. Tell me what components you think are in play, what assumptions you're making about how they interact, and one or two specific things you'd want to verify before changing anything. Don't propose a solution yet.

Three things happen in the next thirty seconds. One: the agent reads, which loads the actual context window with the actual code, not with whatever it would have inferred from the file names. Two: it produces a short structured response — components, assumptions, things to verify — which is cheap to skim and very easy to spot-check. Three: I notice that one of its assumptions is wrong, or that it's missed a file I know matters, and I correct it in plain text. By the time I describe the task, we're inside the same problem.
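
For concreteness, here's the shape of the response I'm skimming for. The file names and details below are invented:

```
Components in play:
  - src/sync/dedupe.ts — owns the deduplication pass
  - src/sync/queue.ts — feeds it batches

Assumptions I'm making:
  - batches arrive sorted by updatedAt before dedupe runs
  - record IDs are unique within a batch

Things I'd verify before changing anything:
  - whether queue.ts still sorts batches after the recent refactor
  - which test, if any, exercises dedupe with duplicate IDs
```

Ten lines, thirty seconds to skim, and if the first assumption is wrong I can kill it in one sentence before it becomes load-bearing.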

The minutes I spend confirming what the agent already sees are the cheapest minutes of the whole session.

This isn't a private trick. It's the part Anthropic's own best-practices doc names directly: the recommended workflow is Explore → Plan → Code → Commit, not jump-to-code. It exists because letting the agent skip exploration tends to produce code that solves the wrong problem. I'd add one nuance from daily use: don't even share your idea during the exploration phase. The agent's framing is most useful when it isn't anchored to yours yet — let it tell you what it sees first.

Karpathy's framing of context engineering as the core skill of the LLM era points at the same thing from a different angle: the prompt is the small part; what the model knows when it reads the prompt is the big part. Exploration is how you find out what it knows.

A useful litmus: if you can describe the exact diff in one sentence, skip exploration. For anything else — especially anything that touches more than one file or module — pay the few minutes upfront. The agent gets better calibration; you get fewer surprises three turns in.
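
Two invented openers, to make the threshold concrete:

```
Skip exploration:  "Rename MAX_RETRIES to MAX_SYNC_RETRIES in src/sync/config.ts."
Pay for it:        "Make the sync pipeline retry failed batches before giving up."
```

The first is the diff in one sentence. The second touches the queue, the retry policy, and whatever tests cover them: exactly the multi-file shape that rewards a few minutes of mapping first.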

Why It Compounds

[Diagram: explore, calibrate, instruct, review, looping around a shared model. Caption: each pass leaves a richer shared map than the one before.]

The reason this isn't just nicer manners with the agent is that the calibration accumulates. After exploration, every later message in the session lands against a shared model. When I say "fix the deduplication logic," the agent already knows which file owns it, what the input shape is, and which test covers it. I'm not re-establishing context every turn; the agent isn't re-inferring it either. The session does more work per token and produces fewer drift bugs.
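
Against the invented dedupe map from earlier, the follow-up can be this terse:

```
Fix the dedupe pass: keep the newest record per ID, not the first seen.
Don't touch queue.ts.
```

One line of instruction doing the work that would otherwise need a paragraph of re-established context.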

There's a second, quieter compounding: the discipline of exploring before instructing makes me better at the rest of the work too. I notice I now reach for the same move in design reviews and product specs — what are the components in play, what am I assuming about how they interact, what would I want to verify? The agent forced me to externalise a thinking step I used to skip when I was the only one in the loop.

A year of building products with AI in the loop has sharpened one belief for me: the biggest gains haven't come from better models or better prompts. They've come from a quieter discipline — naming the territory before drawing the route. Discovery before instruction. Exploration before everything.

Lead with the question, not the answer. The agent is better than you at the answer. You are still better at knowing what the question is — but only if you ask it out loud first, and let the agent meet you there.