Legacy CLIs as the Native Substrate for AI Agents

article · ai · cli · agents · mcp · skills · agent-loops · event-bus

Validates the joelclaw CLI-first architecture: agent loops become more composable when every capability is reachable via terminal contracts and visible in /system/events telemetry.

Andrej Karpathy makes a sharp point in this post: CLI tools are “legacy,” and that’s exactly why they’re agent-ready right now. A terminal gives an AI agent structured input/output, composable commands, and a huge existing ecosystem without inventing a new interface layer.

The clever part is the stack, not just the demo: expose capability through a CLI, make docs exportable to Markdown, add reusable Skills, and optionally expose it via MCP. That turns a product from "human app" into machine-usable infrastructure. His Polymarket example, with a fast Rust path to agent-driven dashboards, captures the pattern cleanly.
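A minimal sketch of the first two checklist items, CLI capability plus Markdown-exportable docs. Everything here is invented for illustration (the `markets_top` name, the `--docs` flag, the input format); the point is the shape: one entry point, deterministic text I/O, docs an agent can read as plain Markdown.

```shell
# Hypothetical capability exposed as a shell function. Names and flags
# are illustrative, not from the post or any real CLI.
markets_top() {
  if [ "$1" = "--docs" ]; then
    # Docs export as plain Markdown, so an agent can ingest them directly.
    cat <<'EOF'
## markets_top
Rank markets by volume (descending).
Usage: markets_top [--docs] < feed
Input: lines of `name volume`.
EOF
    return 0
  fi
  # Deterministic text I/O: lines in, ranked lines out.
  sort -k2,2nr
}

markets_top --docs
printf 'alpha 120\nbeta 340\ngamma 90\n' | markets_top
```

The same function serves humans at a prompt and agents in a pipeline, which is the whole argument for the "legacy" interface.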

For joelclaw, this is immediately practical: keep building around command surfaces (joelclaw CLI, GitHub CLI, and system tooling) that agents can chain into larger workflows. If a capability can be called from a terminal and observed in system events, it’s much easier to plug into agent loops without bespoke glue code.
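The "callable from a terminal, observed in system events" pairing can be sketched as a thin wrapper. The event format and log location below are assumptions for illustration, not the real /system/events schema; a real sink would replace the temp file.

```shell
# Sketch: wrap any CLI call so it is both composable and observable.
# EVENTS_LOG stands in for the /system/events sink (an assumption).
EVENTS_LOG="$(mktemp)"

run_observed() {
  cmd_name="$1"; shift
  "$cmd_name" "$@"
  status=$?
  # Append one structured event line per invocation.
  printf '{"cmd":"%s","status":%d}\n' "$cmd_name" "$status" >> "$EVENTS_LOG"
  return $status
}

run_observed echo "hello from an agent step"
run_observed true
run_observed false || true   # failures are recorded, not fatal here

# Any agent (or human) can now inspect the telemetry stream:
grep -c '"status":0' "$EVENTS_LOG"
```

Because the wrapper changes nothing about the wrapped command's stdout, existing pipelines keep working while the event log accumulates on the side.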

Key Ideas

  • “Legacy” CLI interfaces are often the fastest path to agent integration because they already provide deterministic text I/O and scriptable composition.
  • The “build for agents” checklist in the post is concrete: Markdown docs, Skills, CLI, and MCP.
  • Agent value compounds when tools become modules in bigger pipelines, not one-off assistants; this maps directly to event-driven workflows.
  • Prediction markets are a good stress test because the workload mixes querying, filtering, ranking, and execution from one terminal surface.
  • For joelclaw, the design implication is to keep each capability exposed as a CLI contract plus observability in /system/events.
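The prediction-market stress test from the list above, querying, filtering, and ranking from one terminal surface, can be mimicked with nothing but coreutils. The feed and market names below are toy data; a real agent would swap in an actual markets CLI at the "query" step.

```shell
# Toy query/filter/rank pipeline. Data is invented for illustration;
# fields are `name probability volume`.
feed() {
  printf '%s\n' \
    'election-2028 0.62 54000' \
    'rates-cut-q3 0.41 120000' \
    'etf-approval 0.88 9000' \
    'rates-hike-q3 0.12 87000'
}

# Filter to a topic, rank by volume (field 3), take the top market.
top_market=$(feed | grep '^rates-' | sort -k3,3nr | head -n 1 | awk '{print $1}')
echo "$top_market"
```

Each stage is a separate, swappable command, which is exactly why this workload composes well for agents: the "execution" step would just be one more command appended to the pipeline.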