The Future of Frontend Is Quietly Changing
AI agents and conversational interfaces are reshaping frontend. A reflection on where UI attention shifts as interfaces move from starting point to byproduct of intent.
It's a Friday evening. I'm finishing a Next.js frontend deployment for an AI agent I built for a research team. The product works, but I've still spent hours polishing UI details — spacing, alignment, tiny visual decisions. As I fix the last issues the product manager flagged, a familiar question comes up again: does this still matter as much as it used to? It's a thought that's returned quietly, and repeatedly, throughout 2025.

I've spent years building products, especially on the frontend side. Now I'm asking which parts of that work will still exist going forward. But the first question we need to answer is: why do we need frontend? What's the core job it does that nothing else can? And if that job changes, what happens to everything built around it?
Frontend Has Always Been About Interaction
Frontend has always been about interaction. Screens were simply the most effective interface we had. They imposed structure, constraints, and a shared language — buttons, forms, flows. Think about a checkout page: the layout guides you through steps, the form fields constrain your input, and the submit button makes the action explicit. That structure wasn't accidental. It was the interface doing its job — translating intent into action through clear, predictable patterns.

What's changing isn't the goal. It's the medium. With AI, our interaction with machines is shifting from navigating interfaces to expressing intent directly. The machine understands what we want, not just where we click.
In a growing class of products, intent now matters more than navigation. Instead of opening an app and finding the right path, users express what they want and receive a response. The AI understands their intent and can address it directly. But to do that, it still needs to communicate with them, and that means generating UI. Nothing fundamentally changes about what UI does: it's still the bridge between intent and action. The difference is when and how it gets created: on demand, based on what the user asked for, rather than pre-designed as a fixed path.
AI Agent-Generated Interfaces
By 2025, basic UI generation is already practical. Tools like Vercel v0, GitHub Copilot, and Figma AI can produce usable interfaces from high-level input. Given constraints, they work. They're not especially creative — but they're functional.
AI still struggles with taste, empathy, and contextual judgment. UX remains deeply human work. Where models perform well today is execution inside constraints: assembling functional UI from known components and rules.
What's changed in 2025 is the infrastructure layer. We're not just generating static HTML anymore. We're seeing protocols that let agents produce interactive, stateful interfaces that integrate with the host application. The UI becomes a message format, not a design artifact: an agent or an MCP tool sends a structured description of what should appear, and the host environment, such as ChatGPT, renders that interface for the user.
Some early signals are already visible. Conversational and agent-based environments can render lightweight interfaces — forms, tables, charts — directly inside flows. These aren't full applications, but they point toward a model where interfaces are assembled on demand, based on intent, rather than designed upfront as fixed screens. In that direction, products start to look more like capabilities than screens.
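To make that concrete, here's a minimal sketch of what "assembled on demand" could mean in practice. The message shape below is invented for illustration and isn't any particular product's schema: the agent answers an intent with a structured description, and the host decides how to draw it.

```typescript
// Hypothetical shapes for agent-emitted UI descriptions (not a real product schema).
// The agent says *what* should appear; the host owns *how* it is rendered.

type UIMessage =
  | { kind: "form"; title: string; fields: Field[] }
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "chart"; chartType: "bar" | "line"; labels: string[]; series: number[] };

interface Field {
  name: string;
  label: string;
  inputType: "text" | "number" | "date" | "select";
  options?: string[]; // only used when inputType is "select"
}

// What an agent might emit in response to "show me signups by week" (sample values):
const signupsChart: UIMessage = {
  kind: "chart",
  chartType: "bar",
  labels: ["W1", "W2", "W3", "W4"],
  series: [120, 180, 160, 240],
};
```

The important part is that the description carries no layout and no code, only structure the host already knows how to render.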

The Protocol Layer: A2UI and Declarative UI
By late 2025, we're seeing the emergence of protocols that formalize this shift. A2UI (Agent-to-User Interface), an open standard from Google, represents a concrete example of where this is heading. Instead of agents sending HTML or JavaScript, they send declarative JSON that describes UI components. The client renders these using its own native components — React, Angular, Flutter, SwiftUI — whatever the platform uses.
The protocol works because it treats UI as data, not code. Agents can only request pre-approved components from the client's catalog, which means security comes from the declarative format itself. The same A2UI message renders natively across web, mobile, and desktop because the agent describes intent, and the client maps it to native widgets. And because the protocol is LLM-friendly, agents can stream UI updates as they generate them — forms appearing field by field, charts populating as data arrives.
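Here's a rough sketch of how a client might enforce that constraint, assuming a host-side catalog of approved renderers. The request shape, component names, and render function are illustrative, not the actual A2UI wire format.

```typescript
// Illustrative only: a declarative component request an agent might send,
// and a client-side catalog that maps approved component names to renderers.

interface ComponentRequest {
  component: string;                  // must match a catalog entry
  props?: Record<string, string>;
  children?: ComponentRequest[];
}

type Renderer = (props: Record<string, string>, children: HTMLElement[]) => HTMLElement;

// The client's catalog of approved components (hypothetical names).
const catalog: Record<string, Renderer> = {
  Card: (props, children) => {
    const el = document.createElement("section");
    el.className = "card";
    if (props.title) {
      const heading = document.createElement("h3");
      heading.textContent = props.title;
      el.append(heading);
    }
    el.append(...children);
    return el;
  },
  TextInput: (props) => {
    const input = document.createElement("input");
    input.type = "text";
    input.name = props.name ?? "";
    input.placeholder = props.label ?? "";
    return input;
  },
  SubmitButton: (props) => {
    const button = document.createElement("button");
    button.type = "submit";
    button.textContent = props.label ?? "Submit";
    return button;
  },
};

function render(request: ComponentRequest): HTMLElement {
  const renderer: Renderer | undefined = catalog[request.component];
  if (!renderer) {
    // Security comes from the format: unknown components are rejected,
    // never executed or interpreted as code.
    throw new Error(`"${request.component}" is not an approved component`);
  }
  const children = (request.children ?? []).map(render);
  return renderer(request.props ?? {}, children);
}
```

In a streaming setup, the same render step would simply run again as the agent sends additional or updated requests, which is how a form can appear field by field.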

What This Means for Frontend Work
When UI becomes a protocol, the work shifts. You're no longer designing every screen upfront. Instead, you're building the system that can generate those screens on demand. The protocol handles the structure; your job is to make sure what gets generated feels right.
This changes where frontend effort goes. Less time on pixel-perfect layouts for every screen. More time on:
- Component catalogs that agents can reference. What components are available? What are their capabilities? How do they map to agent intents? (See the sketch after this list.)
- Design system integration so agent-generated UI matches your brand. The protocol handles structure; you handle styling, accessibility, and polish.
- State management for dynamic, agent-driven interfaces. How do you handle data binding when the UI structure arrives as messages?
- Performance optimization for streaming UI updates. Progressive rendering means thinking about what appears first, what can wait, and how to handle partial updates gracefully.
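As a sketch of the first two points, here's what a catalog entry might carry so an agent knows what it's allowed to request. The field names are invented for illustration; styling and accessibility stay with the client's design system, and the agent only ever sees this metadata.

```typescript
// Hypothetical catalog entries describing what agents may request.
// The agent reads this metadata to pick a component for an intent;
// how it looks remains a client-side, design-system decision.

interface CatalogEntry {
  name: string;                                            // identifier agents reference
  description: string;                                     // natural-language hint for the model
  props: Record<string, "string" | "number" | "boolean">;  // allowed props and their types
  intents: string[];                                       // agent intents this component serves
}

const componentCatalog: CatalogEntry[] = [
  {
    name: "DataTable",
    description: "Sortable table for tabular results",
    props: { caption: "string", pageSize: "number" },
    intents: ["show-records", "compare-items"],
  },
  {
    name: "DateRangePicker",
    description: "Pick a start and end date",
    props: { label: "string", allowPast: "boolean" },
    intents: ["filter-by-time", "schedule"],
  },
];

// The client shares the catalog with the agent (for example in a tool manifest)
// and validates every incoming request against it.
function isApprovedRequest(component: string, catalog: CatalogEntry[]): boolean {
  return catalog.some((entry) => entry.name === component);
}
```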
The work doesn't disappear. It shifts from "designing screens" to "designing systems that can generate screens on demand." And that shift changes what you prioritize from day one.

A Planning Lens

So back to that Friday evening question: does this still matter? The answer is yes, but differently. Frontend's core job hasn't changed — it's still about interaction, still the bridge between intent and action. What's shifting is when and how it gets created. Instead of designing every screen upfront, we're building systems that can generate screens on demand. Instead of owning the interface, we're thinking about how our product shows up as actions, cards, or workflows inside other interfaces.
This doesn't apply to all products. Those with more focus on branding and creativity will always need the human touch, the intentional design decisions that make something feel right. But for a growing class of products, the shift is already happening.
If you're building AI products today, ask yourself: can an agent use your product directly? Can it live inside another interface? Does it still make sense when the UI isn't yours? The future of frontend isn't the disappearance of screens. It's a re-balancing toward interaction and intent. UX remains human. UI still matters — but increasingly as an expression of decisions made elsewhere, not the place where they begin.
And in a world where AI moves fast, who knows? Maybe we'll find other ways to interact with AI that we haven't imagined yet.