I built something different than I thought I would
A few weeks ago, I joined a hackathon with one goal: rethink how people manage their organic produce box subscription. If you've ever had one of these, you know the experience. Every week, a box of fruits and vegetables shows up. Sometimes you love it. Sometimes you get fennel for the third week in a row and wonder if anyone's paying attention.

The current way to manage it? Log into a website, scroll through a list, manually swap items, maybe set a vacation hold through a form that looks like it was built in 2011.

I wanted to build something different. An agent that actually knows you. It shows you what's in this week's box as visual cards, not a text list. It lets you swap items with a tap. It notices that you've replaced eggplant three weeks running and stops suggesting it. It pulls up a vacation date picker when you mention you're traveling next month. It even suggests recipes based on what's in your box and generates a cover image to go with them.

Not a chatbot that describes your produce. An interface that adapts to what you need, right when you need it.

I'd been reading about Google's A2UI protocol and figured that's what I was using. Agent describes UI, client renders it natively. Made sense. So I built it. And it worked. The demo landed well.

But here's the thing. After the hackathon, I started digging into the actual A2UI spec. And the more I read, the more I realized: what I built wasn't A2UI at all.

A2UI is a protocol. It standardizes how agents describe UI intent in a cross-platform format, so a remote agent can send a UI payload to any client, whether that's a web app, a mobile app, or a Flutter widget.

What I built was closer to Tambo or Crayon. I registered a set of components (product cards, swap interfaces, recipe cards, a date picker). The agent picked from them based on conversation context and filled them with data. No protocol. No cross-platform standardization. Just smart component selection.
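Here's a minimal sketch of that pattern, in TypeScript. To be clear, this isn't Tambo's or Crayon's actual API; every name in it (the registry, resolveUiChoice, the component props) is hypothetical, and zod is just my assumed validation library. It only shows the three moving parts: a fixed menu of components, a structured choice from the model, and validation before anything renders.

```typescript
import { z } from "zod";

// Each registered component pairs a human-readable description (shown to
// the model so it can choose) with a schema for the props it must fill in.
type RegistryEntry = {
  description: string;
  props: z.ZodTypeAny;
};

// 1. The fixed menu of components the agent may choose from.
const registry: Record<string, RegistryEntry> = {
  productCard: {
    description: "One item in this week's box, with image and swap button",
    props: z.object({
      name: z.string(),
      imageUrl: z.string(),
      swappable: z.boolean(),
    }),
  },
  vacationPicker: {
    description: "Date-range picker for pausing deliveries",
    props: z.object({ earliestDate: z.string() }), // ISO date
  },
  recipeCard: {
    description: "Recipe suggestion based on current box contents",
    props: z.object({
      title: z.string(),
      ingredients: z.array(z.string()),
    }),
  },
};

// 2. What the model returns: which component to show and the data for it.
//    In practice this comes from a tool/function call constrained to the
//    registered component names.
interface UiChoice {
  component: string;
  props: unknown;
}

// 3. Validate the choice before rendering. Anything outside the registry
//    is rejected, which is what keeps this approach easy to control.
function resolveUiChoice(choice: UiChoice) {
  const entry = registry[choice.component];
  if (!entry) throw new Error(`Unknown component: ${choice.component}`);
  return { component: choice.component, props: entry.props.parse(choice.props) };
}

// Example: the agent decided a vacation picker is the right UI to show.
const ui = resolveUiChoice({
  component: "vacationPicker",
  props: { earliestDate: "2025-07-01" },
});
```

The whole trick is that the model can only order off the menu, which is also exactly why it isn't a protocol.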
And honestly? For a hackathon demo, that was the right approach. Fast to build, easy to control, and the result felt like a real product. But I had the terminology wrong. And I don't think I'm the only one.

Why this matters for you

If you're building anything with AI right now, or thinking about how your product should evolve, the phrase "adaptive UI" is everywhere. It sounds like one thing, but it's actually several.

There's the component layer: predefined UI elements that an agent picks from. This is what most people need and what tools like Tambo and Crayon solve. Simpler than you'd think, and already a massive upgrade over plain text responses.

There's fully generated UI: the agent creates the interface from scratch, on the fly. Think V0 or Claude Artifacts. Powerful, but expensive, slow, and hard to keep on brand.

And then there's the protocol layer: A2UI, MCP Apps, and others that standardize how agents describe and deliver interfaces across platforms. This is where things get interesting for multi-agent systems or cross-platform products.

The confusion I had at the hackathon, thinking I was using A2UI when I was really doing component selection, is exactly the kind of thing that slows teams down. You pick the wrong tool because the terminology is fuzzy. Or you overbuild because you think you need a protocol when a component library would do.

Go deeper

So I did what I always do when something confuses me. I wrote it up.

New blog post: "Adaptive UI: The Missing Layer in Agentic AI" breaks down the three layers, the tools behind each, and how to decide what your product actually needs.

I also wrote a blog post last week making the case that the chat era is ending and agentic experiences are taking over. If you missed it, it pairs well with this one, and it includes a demo that makes the ideas easier to grasp.

If you're wrestling with similar questions in your own product, or if you just want to tell me I should stop replacing the eggplant, hit reply. I read every one.