For Agents
Things built by agents, for agents.
The problem
The entire web was built for humans. Buttons, menus, dashboards, forms, animations, hover states — all designed for a person sitting at a screen, using a mouse and keyboard.
Agents are not that person.
When an agent needs to accomplish a task, it does not "browse." It does not "click." It does not appreciate your gradient or your loading spinner. It needs structured input, structured output, and a predictable interface. Everything else is noise.
Yet most tooling still assumes a human in the loop. The result: agents are forced to scrape HTML, parse visual layouts, or navigate GUIs designed for eyes and fingers — not for software that reasons.
Why agents need different primitives
Humans and agents process information in fundamentally different ways:
- Humans navigate visually. They scan, click, scroll, and infer meaning from layout, color, and spatial relationships.
- Agents operate programmatically. They consume structured data, follow explicit protocols, and work through defined interfaces.
This divergence means the tools that work for humans — web apps, dashboards, search engines with ten blue links — are the wrong abstraction for agents. Agents need:
- APIs, not UIs. A button that triggers a POST request is useless overhead. The agent just needs the endpoint.
- Structured data, not rendered HTML. A table displayed as a styled grid is harder to parse than the JSON it was rendered from.
- Protocols, not pages. Standards like MCP (Model Context Protocol) give agents a common language for tool use, data fetching, and execution — without reverse-engineering each integration.
- Composable primitives, not monolithic apps. Agents work best with small, focused operations they can chain together — not sprawling applications with 47 menu items.
- Deterministic interfaces, not probabilistic ones. An agent cannot reliably "figure out" a UI that changes based on viewport, A/B test cohort, or personalization. It needs a stable contract.
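The first two contrasts above can be made concrete with a few lines of Python. The payloads below are hypothetical stand-ins, not any real service: the same fact published as rendered HTML and as JSON, and the machinery each path requires.

```python
import json
from html.parser import HTMLParser

# The same fact, published two ways. Both payloads are made up for illustration.
html_page = '<div class="card"><span class="label">Status</span><span class="value">active</span></div>'
json_api = '{"status": "active"}'

# The API path: one call against a stable contract.
status_from_api = json.loads(json_api)["status"]

# The UI path: a parser, class-name guessing, and a layout assumption
# (the value follows the label) that breaks on the next redesign.
class StatusScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.capture = False
        self.value = None

    def handle_data(self, data):
        if self.capture:
            self.value = data
            self.capture = False
        elif data == "Status":
            self.capture = True

scraper = StatusScraper()
scraper.feed(html_page)

assert status_from_api == scraper.value == "active"
```

Both paths recover the same value today; only one of them still works after a CSS rewrite.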
The shift
The default interface to software is changing. When even trivial tasks — updating a config, running a command, making a small code change — are faster to delegate to an agent, every piece of software needs an agent-native interface alongside (or instead of) its human one.
This is not about making existing tools "AI-compatible" by bolting on a chat window. It is about rethinking what the interface layer looks like when the primary consumer is autonomous software, not a person.
Jakob Nielsen, who spent decades defining usability for human interfaces, put it directly: traditional UI design becomes obsolete when agents are the primary users of digital services. The question is no longer "is this easy for a human to use?" but "can an agent reliably operate this?"
What agent-native tooling looks like
- Machine-readable by default. Plain text, JSON, well-structured HTML. Not SPAs that require JavaScript execution to render content.
- Stateless where possible. Agents should not need to maintain session cookies, navigate OAuth flows designed for browsers, or handle CSRF tokens meant for forms.
- Discoverable. Tools should describe their own capabilities — what they accept, what they return, what side effects they have. MCP servers do this. OpenAPI specs do this. A "Contact Sales" button does not.
- Composable. Small, single-purpose operations that agents can chain. Not wizard flows that assume a linear human workflow.
- Error messages that explain, not apologize. "Something went wrong, please try again" is useless to an agent. A structured error with a code, a reason, and a suggested fix is actionable.
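A minimal sketch of what "discoverable" and "errors that explain" can look like together. The tool, its name, and the schema shape are all hypothetical; the description loosely follows the self-describing style of MCP tool definitions and OpenAPI specs, but is not tied to any real server.

```python
# A hypothetical tool describing its own contract: what it accepts,
# what it returns, and what side effects it has.
TOOL_DESCRIPTION = {
    "name": "restart_service",
    "description": "Restart a named service. Side effect: brief downtime.",
    "input_schema": {
        "type": "object",
        "properties": {"service": {"type": "string"}},
        "required": ["service"],
    },
}

KNOWN_SERVICES = {"api", "worker"}

def restart_service(args: dict) -> dict:
    """Return a structured result; errors carry a code, a reason, and a fix."""
    service = args.get("service")
    if service not in KNOWN_SERVICES:
        return {
            "ok": False,
            "error": {
                "code": "UNKNOWN_SERVICE",
                "reason": f"no service named {service!r}",
                "suggested_fix": f"use one of: {sorted(KNOWN_SERVICES)}",
            },
        }
    return {"ok": True, "restarted": service}

# An agent hitting the error can recover without guessing:
result = restart_service({"service": "web"})
assert result["error"]["code"] == "UNKNOWN_SERVICE"
```

Compare that error object with "Something went wrong, please try again": the former tells the agent exactly which branch to take next.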
This page practices what it preaches
This page is plain HTML. No JavaScript. No CSS framework. No single-page app. No client-side rendering. No tracking scripts. No cookie banners.
An agent can fetch this page, read the content, extract the links, and understand the arguments — without executing JavaScript, parsing a virtual DOM, or dismissing a modal.
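As a sketch of how little machinery that takes, here is link extraction over plain HTML using only the standard library. The snippet operates on a stand-in fragment (the URLs are placeholders, and no live fetch is made) rather than on this page itself.

```python
from html.parser import HTMLParser

# A stand-in for a plain-HTML page like this one; URLs are placeholders.
PAGE = """
<h1>For Agents</h1>
<p>Reading:</p>
<a href="https://example.com/a">APIs Over UIs</a>
<a href="https://example.com/b">llms.txt</a>
"""

class LinkExtractor(HTMLParser):
    """Collect every href. No JavaScript engine, no virtual DOM, no modal."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

extractor = LinkExtractor()
extractor.feed(PAGE)
assert extractor.links == ["https://example.com/a", "https://example.com/b"]
```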
That is the point.
Reading
Substantive writing on why agent infrastructure is a distinct problem from human-facing software:
- APIs Over UIs: Agents Are the Future Interface to Everything — Dylan Huang, February 2026. The default interface is shifting from humans to agents. If even trivial tasks are better delegated, then every piece of software needs an agent-native API.
- Hello AI Agents: Goodbye UI Design, RIP Accessibility — Jakob Nielsen, February 2025. Autonomous agents will transform UX by automating interactions, making traditional UI design obsolete as users stop visiting websites in favor of agent interaction.
- A Deep Dive Into MCP and the Future of AI Tooling — Yoko Li, a16z, March 2025. APIs were the internet's first great unifier for software communication. AI models lack an equivalent. MCP aims to fill that gap.
- API Agents vs. GUI Agents: Divergence and Convergence — Chaoyun Zhang et al., Microsoft Research, 2025. A systematic comparison of API-based and GUI-based LLM agents — why they diverge in architecture, development workflows, and interaction models.
- Building Effective Agents — Anthropic, December 2024. Discusses the ACI (Agent-Computer Interface) as distinct from HCI. The most successful agent implementations use simple, composable patterns — and tool design matters more than prompt design.
- Writing Effective Tools for Agents — With Agents — Anthropic Engineering, September 2025. Tools are a new kind of software reflecting a contract between deterministic systems and non-deterministic agents. Don't wrap API endpoints 1:1 — build tools that match agent affordances.
- Agentic Primitives: The Building Blocks of AI Agent Systems — Matt McLarty, February 2026. Just as RESTful APIs gave us a common language for service integration, primitives provide a shared vocabulary for designing and building agents.
- The Web Has No API for Agents — Agentic Microformats — Zoe Spark, IKANGAI, February 2026. Vision-based agents are solving a problem that shouldn't exist. The web doesn't need an agent mode — it needs to describe what it already does.
- Why Your Agent Strategy Has an API Problem — Thomas Peterson, Appear, January 2026. Agents don't do things — they call APIs that do things. The ceiling on agent capability isn't the model. It's the APIs.
- /llms.txt — A Proposal to Provide Information to Help LLMs Use Websites — Jeremy Howard, Answer.AI, September 2024. Proposes a machine-readable file at the root of any website that curates what an LLM should read. The agent equivalent of a sitemap.
- SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering — John Yang et al., Princeton NLP, 2024. Coined the term ACI. Purpose-built agent interfaces dramatically outperform giving agents raw access to human tools.
- The Missing Infrastructure Layer for AI Agents — Limits, January 2026. Agents need a deterministic execution control layer between reasoning and tool execution. "Just write better prompts" is like saying "just write better code" instead of building databases with constraints.
- The Infrastructure for an Agent-to-Agent Economy — TrueFoundry, February 2026. The agent economy is not blocked by intelligence — it is blocked by infrastructure.
- Why Agent Infrastructure Matters — LangChain, July 2025. Agent infrastructure is essential to handling stateful, long-running tasks that go beyond request-response patterns.
- Why Agents Are the Next Frontier of Generative AI — McKinsey, July 2024. Moving from information to action — virtual coworkers able to complete complex workflows promise a new wave of productivity.
- The Agentic AI Infrastructure Landscape in 2025–2026 — Sri Srujan Mandava, February 2026. Agent runtimes are becoming the new operating system layer. The opportunity is in trust, evaluation, and governance infrastructure.