After WIMP: The Interface Paradigm Agents Actually Need
The gateway and system bus already operate post-WIMP — no menus, no pointers, just structured intent and event contracts — which maps directly onto the paradigm shift this article describes.
Sunil Pai — previously React core at Meta, now deep in Cloudflare Workers and AI tooling — writes about what interfaces look like when we stop assuming a human is on the other end. WIMP stands for Windows, Icons, Menus, Pointer — the paradigm that has dominated computing since Xerox PARC in the 1970s and went mainstream with the 1984 Macintosh. Every UI convention we have — hover states, dropdown menus, modal dialogs, breadcrumbs — was designed around the constraints of human visual perception and hand-eye coordination. The post-WIMP question is: what happens when the user doesn’t have eyes or hands?
The instructional design angle is what makes this more than a hot take about chatbots. Instructional design has always been about understanding how a learner processes information and designing the delivery medium to match. Humans need chunking, visual hierarchy, progressive disclosure — all the scaffolding WIMP provides. Agents process structured data, follow schemas, and don’t need affordances to discover actions. The pedagogical parallel is sharp: you wouldn’t design a textbook and a machine-readable API the same way, even if they’re conveying the same information.
Post-WIMP HCI research has existed for decades — touch, gesture, voice, spatial interfaces all break from the strict WIMP model. But agents break it more fundamentally. A touchscreen still has a human on the other side. An agent interface has nobody. That means every convention we’ve accumulated — loading spinners, error dialogs, confirmation prompts — is dead weight. The right abstraction is closer to what REST and JSON-RPC brought to machine-to-machine communication: structured contracts that don’t care about rendering.
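A minimal sketch of what “structured contracts that don’t care about rendering” means in practice: the same action a human would reach through a menu and a dialog, expressed as a single JSON-RPC 2.0 call. The method name and parameters here are hypothetical, purely for illustration — only the envelope fields (`jsonrpc`, `id`, `method`, `params`) come from the JSON-RPC 2.0 spec.

```python
import json

# A hypothetical "export document" action. A WIMP UI would surface this as
# File > Export... plus a format dialog; for an agent it is one structured call.
request = {
    "jsonrpc": "2.0",                # protocol version, per the JSON-RPC 2.0 spec
    "id": 1,                         # lets the caller match the response to the request
    "method": "document.export",     # hypothetical method name
    "params": {"doc_id": "abc123", "format": "pdf"},  # hypothetical params
}

# The contract is the schema, not the pixels: no spinner, no confirmation
# prompt, nothing to render -- just a payload the agent can serialize.
wire = json.dumps(request)
print(wire)
```

Notice what is absent: there is no "loading" state, no error dialog, no disabled button. Errors come back as structured response objects the agent can branch on, which is the whole point of the contract.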
Key Ideas
- WIMP exists to bridge human perception and machine state — menus externalize available actions, icons compress semantic meaning, pointers translate analog motor control to digital precision. None of these bridges are necessary when the consumer is an agent.
- Instructional design as a lens: the same principles that tell you to use worked examples for novice learners tell you something different about what agents need — schemas, examples in training data, clear action surfaces, not progressive disclosure.
- Sunil Pai sits at an interesting intersection — his Cloudflare Workers work lives at the layer where interfaces become infrastructure, making him well-positioned to think about what post-WIMP looks like in practice.
- The pattern is already visible in how agent tools are evolving: function calling, MCP, HATEOAS JSON responses — all of these are post-WIMP by design, even when nobody names them that.
- Agent interfaces should probably look more like CLI design — composable, explicit, machine-readable — than like any web UI pattern we’ve inherited.
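One way to see HATEOAS as “menus for machines”: the response itself lists the valid next actions, so an agent discovers what it can do the same way a human scans a menu. A hedged sketch — the order resource, link relations, and `_links` shape below are made up for illustration (loosely in the style of HAL), not taken from the source article.

```python
# Hypothetical HATEOAS-style response for an order resource. The "_links"
# block plays the role a menu plays in WIMP: it externalizes the actions
# that are valid *right now*, given the resource's current state.
response = {
    "order_id": "ord_42",
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/ord_42"},
        "cancel": {"href": "/orders/ord_42/cancel", "method": "POST"},
        "pay":    {"href": "/orders/ord_42/pay", "method": "POST"},
    },
}

def available_actions(resource: dict) -> list[str]:
    """An agent's 'menu scan': read the affordances out of the payload itself."""
    return [rel for rel in resource.get("_links", {}) if rel != "self"]

print(available_actions(response))  # -> ['cancel', 'pay']
```

If the order were already shipped, the server would simply omit `cancel` from `_links` — no greyed-out menu item, no disabled button, just a different contract. That is the post-WIMP move: state-dependent affordances live in the data, not in the rendering.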
Links
- After WIMP — source article
- Sunil Pai’s site — author
- WIMP computing — Wikipedia
- Post-WIMP — Wikipedia
- Model Context Protocol (MCP) — a concrete post-WIMP agent interface standard
- Cloudflare Workers AI — Sunil’s current context
- Instructional Design — Wikipedia
- HATEOAS — the REST constraint that makes APIs self-describing, a post-WIMP pattern for machines