The prompt-box paradox

Why AI needs ephemeral interfaces

By Doug Cook, 8 Oct 2025

Andrej Karpathy recently noted that interacting with ChatGPT feels a lot like “talking to an operating system through the terminal.”

This perfectly captures the current predicament facing AI design: we’ve built systems capable of understanding and generating any form of structured information, yet we continue to confine them to chat windows.

Every AI product looks the same: a text field, a sidebar of history, and an endless scroll of chat bubbles.

As Luke Wroblewski laments, we’re all designing the same thing.

The prompt box is the modern equivalent of the command line. It presents users with infinite possibilities but zero affordances. Faced with a blank field, people have no real indication of a system’s capabilities or limitations.

Despite support for natural language, the burden still falls on the user to compose and perfect their prompt. Though conversational, it’s just another form of command. We can do better.

The power of affordances

There’s a clear hierarchy in how quickly people process information. Visual pattern recognition happens near-instantaneously. Reading proceeds at roughly 250 words per minute, listening at about 150, and writing crawls along at 40 words per minute, the average typing speed.

Chat interfaces occupy the bottom half of this hierarchy, forcing users to type their thoughts, wait for responses, then read those responses back for understanding. We’re effectively pushing all interaction through the narrowest band available when these systems are capable of so much more. Every interaction with AI has become either a conversation or a writing exercise.

This is an unnecessary constraint. Traditional interfaces already show us how to communicate at the speed of sight. Visual interfaces let users scan dozens of options in seconds, recognize states through color in milliseconds, and execute actions with single clicks. Compare this to crafting a 50-word prompt, waiting for processing, then parsing a 200-word response.

Generative UI

What’s needed isn’t better chat interfaces but generative UI: interfaces created in real-time based on the user’s specific context and needs. Rather than static prompt boxes, these interfaces can be semantic, understanding not just what people type, but also what they aim to accomplish.

Consider interfaces that materialize based on context, then dissolve when a task is complete. Not discrete applications with persistent menus and toolbars, but contextual controls rendered ephemerally, on demand. These systems surpass chat by presenting exactly what’s needed, exactly when it’s needed. The interface becomes responsive to workflow rather than fixed and pre-defined.
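To make this concrete, here is a minimal sketch of that lifecycle in TypeScript. The Task interface and materialize function are hypothetical names used for illustration, not an existing framework API: a control appears when a task begins and removes itself when the task signals completion.

```ts
// Illustrative sketch of an ephemeral control: it materializes for a task
// and dissolves when the task completes. Task is a hypothetical type, not
// a real framework API.

interface Task {
  description: string;
  onComplete(callback: () => void): void;
}

function materialize(task: Task, host: HTMLElement): void {
  const panel = document.createElement("section");
  panel.textContent = `Controls for: ${task.description}`;
  host.appendChild(panel); // the control appears alongside the task...

  task.onComplete(() => panel.remove()); // ...and dissolves when it ends
}
```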

LLMs aren’t just text and image generators; they’re universal interface generators waiting to be unleashed. The same model that can interpret complex datasets can generate the precise visualization tools needed to explore and understand them. And yet, despite this capability, we’re barely scratching the surface.

Current systems require users to understand AI capabilities, hold detailed conversations, or craft appropriate prompts. We’ve mistaken open-endedness for flexibility, creating interfaces capable of anything but optimized for nothing. Adaptive interfaces invert this relationship: the system bears responsibility for understanding user intent and generating appropriate interaction mechanisms.

Achieving this requires a fundamental rethinking of interaction itself. We need AI-native rendering systems—interfaces that are dynamic, semantic, and generated in real time. The foundations already exist: LLMs can manipulate DOM elements, render charts, integrate maps, and orchestrate multi-step workflows.
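As a rough sketch of what such a rendering loop could look like, the TypeScript below asks a hypothetical LLM client for a JSON UI specification and renders it into DOM elements. The llm client, the UISpec schema, and the function names are all assumptions made for illustration, not a real API.

```ts
// A minimal sketch of an AI-native rendering loop. The llm client, the
// UISpec schema, and these function names are hypothetical.

declare const llm: { complete(prompt: string): Promise<string> };

type UISpec =
  | { kind: "text"; content: string }
  | { kind: "button"; label: string; intent: string };

async function renderForIntent(intent: string, root: HTMLElement) {
  // Ask the model for controls that match the user's goal, not their words.
  const raw = await llm.complete(
    `Return a JSON array of UI elements for this task: ${intent}`
  );
  const specs: UISpec[] = JSON.parse(raw);

  root.replaceChildren(); // ephemeral: the previous interface dissolves
  for (const spec of specs) root.appendChild(toElement(spec, root));
}

function toElement(spec: UISpec, root: HTMLElement): HTMLElement {
  if (spec.kind === "text") {
    const p = document.createElement("p");
    p.textContent = spec.content;
    return p;
  }
  const button = document.createElement("button");
  button.textContent = spec.label;
  // Each click feeds a new intent back into the loop.
  button.onclick = () => renderForIntent(spec.intent, root);
  return button;
}
```

In practice, a system like this would validate the model’s output against a schema before rendering it, since generated specs can’t be trusted blindly.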

The path forward

Graphical user interfaces made computers usable by everyone, not just those who could memorize commands. Now we face a similar inflection point with AI.

We can continue iterating on prompt boxes, treating AI like a smarter command line. Or we can embrace the harder challenge: building adaptive systems that generate interfaces as naturally as they generate text, shifting from fixed paths to understanding intent.

Just as GUIs democratized computing by replacing commands with affordances, generative UI can democratize AI by replacing prompts with adaptive interfaces.

Doug Cook

FOUNDER AND PRINCIPAL

Doug is the founder of thirteen23. When he’s not providing strategic creative leadership on our engagements, he can be found practicing the time-honored art of getting out of the way.
