Coding Was Never the Job
AI didn't take the job. It made it obvious that writing code was the smallest, most replaceable part of what I was actually hired to do.
The question I've been getting, in one form or another, for most of the past year is whether I'm scared. It comes from engineers I've worked with, from friends in tech, from people who corner me at meetups after watching a demo of Claude finish an API in under two minutes — schema, handlers, tests. The question is rarely about me. They're asking it about themselves.
I sat with it the first few times longer than I expected to. What I noticed surprised me: I wasn't scared. I was relieved. The part of the job I'd been doing reluctantly for years had finally found a better tool, and the part I actually cared about had more room than it ever did.
That answer probably reads as either denial or smugness on a first pass. It isn't. I've been delegating most of my implementation to AI for over a year now, across every project I'm currently shipping, and the throughput is up, the quality is up, and the parts of my day I'd flag as "the actual work" haven't shrunk. They've grown.
The Visible Tenth
A senior software engineer doesn't get hired because they can write code. They get hired because someone, somewhere, has a problem they can't articulate yet, and they need a person who can articulate it for them and then make sure the resulting system actually solves it. Code is one of the artifacts that comes out the other end. It is genuinely not the thing.
What I've been delivering for the last decade — and what AI is conspicuously bad at — looks like this:
- Pulling the actual problem out of a brief that describes symptoms. Most clients don't know what they need; they know what hurts.
- Knowing what not to build. The most expensive thing in software is the feature nobody used. I've talked clients out of entire product directions because the simpler path was right there and nobody had drawn it.
- An architecture that survives contact with the team that has to live in it. Not the cleanest one on paper — the one that the existing humans, on the existing roadmap, can extend without quietly hating you in six months.
- The judgment call at the fork where two reasonable paths diverge. AI gives me both options in a list. It doesn't bear the weight of which one we pick.
- Translation between business and system. When Adidas says "real-time," they don't mean what a backend engineer means. Living at that seam is most of the job.
None of this got cheaper. Some of it got more important, because the cost of building the wrong thing collapsed. When implementation was slow, the wrong call cost a quarter. When implementation is hours, the wrong call still costs a quarter — because the wrong thing got built faster.
The Rung Below the One You're Standing On
This isn't new. Every abstraction layer in computing has done the same thing — eliminated one category of manual work and expanded the leverage of everything above it. Assembly gave way to C. C gave way to managed runtimes. Managed runtimes gave way to frameworks. IDEs, then Stack Overflow, then Copilot. Each time, a generation of engineers panicked. Each time, the people who thrived were the ones who understood what they were actually doing — not the syntax, but the thinking underneath the syntax.
Karpathy described his own shift around late 2024 — from writing most of the code himself to delegating most of it to agents and reviewing the output. He didn't frame it as loss. He framed it as a workflow that finally let him move at the speed he was thinking. That description matches mine.
GitHub's own research on Copilot users — which is worth reading with some healthy skepticism, since they're not a neutral party — claims developers complete tasks meaningfully faster with AI in the loop and that AI authors a sizeable share of the code that gets written. The exact numbers are less interesting than the direction. The direction is real. The compression is real. The question is what you do with the compressed time.
If your value lived in the rung below AI — if it was mostly typing speed, syntax recall, the ability to translate a clear mental model into characters quickly — then yes, AI is a problem. If your value lived in the rungs above — judgment, system shape, knowing what shouldn't get built — then the compression is leverage, not displacement. I wrote about the system-coherence side of this in AI Writes the Code. Someone Still Has to Architect It. — the local code is fine; the seams between modules are still where humans earn their keep.
The honest test: when you remove the typing, is there a job left? If yes, AI is an upgrade. If no, the gap has been there for a while — AI just made it visible.
What the Day Actually Looks Like Now
The biggest change in my day-to-day isn't that I write less code. It's that I spend more time before the code starts.
I'm more deliberate about problem definition than I've ever been — I write out, in plain language, what I'm actually trying to solve before I give the agent a single token of context. That discipline has made me a better engineer, not a lazier one. The clearer the upstream thinking, the better the downstream output. The feedback loop rewards clarity in a way it never did when I was the one typing.
Architecture stays mine. The agent doesn't make structural decisions; it works inside the structure I've set. When I haven't thought clearly about the system, the output reveals it — fast and unkindly. That's a useful mirror.
Review is where the craft lives now. Reading AI-generated code is a different mode of attention than writing it. I'm not checking syntax. I'm checking intent. Does this reflect the decision I made? Does it bend the system in a direction I meant? That kind of review is harder than writing the code in the first place — but it's also more leveraged, because one clean review fixes a thousand future problems.
And I'm running more projects in parallel — not because I'm stretched thinner, but because the implementation overhead that used to bottleneck me is gone. The constraint moved from "how fast can I type" to "how many systems can I hold in my head at once." The second constraint binds more slowly than the first did.
The uncomfortable part — and there is one — is that for years the industry hid the gap between people whose value was upstream and people whose value was the typing. There was always enough implementation work to keep both kinds of engineer busy. AI collapsed that buffer. It didn't create a new weakness; it revealed one that was already there. The answer isn't to type faster. It's to move further upstream.
What got replaced was always the least interesting part anyway. The actual job — understanding the problem, making the call, holding the system coherent — that part is more yours than it has ever been.