Building a Gateway for Your AI Agent
```bash
npx skills add joelhooks/joelclaw --skill gateway-setup
```

Your agent session dies when you close the terminal. Every WebSocket disconnects, every event subscription is gone, all in-memory state evaporates. The gateway fixes that: a persistent daemon that stays alive, routes events, monitors its own health, and gives you Telegram access from your phone.
Prerequisite: Self-hosted Inngest + Redis.
For agents: install the skill and it walks through the full build, adapted to your machine and intent:

```bash
npx skills add joelhooks/joelclaw --skill gateway-setup --yes --global
```

The skill contains the compressed ADR chain with all tradeoffs, ADR-0018 through ADR-0038, so your agent has the full reasoning context.
Or curl the setup script:
```bash
curl -fsSL https://joelclaw.com/scripts/gateway-setup.sh | bash
```

The problem
Close the terminal and your agent disappears.
Not because the work stopped. Because your only control plane just evaporated. Background jobs finish. Loops complete. But nothing can tap you on the shoulder and say your shit is done.

Interrupts burn context budget too. Every one eats tokens from whatever the agent was working on. The gateway solves the routing problem: only the right session gets the message.
I wanted three things: an always-on session that handles system heartbeats, targeted notifications back to whichever session started a task, and phone access when I’m away from the keyboard.
What it is
A Redis event bridge between your background infrastructure (Inngest, cron, webhooks) and your AI agent’s session. Events route to the right session. Failures get detected. Responses go back through the channel that asked.
```
Inngest functions ──→ Redis ──→ pi extension ──→ agent session
                        ↑
                  pub/sub notify
```

The evolution (4 iterations)
I didn’t design this upfront. Each iteration solved a real problem.
v1: Redis bridge
Inngest functions push events to a Redis list. A pi extension subscribes to a pub/sub channel and drains the list into the session as a user message.

Why Redis and not a proper message queue? Because it was already running for caching, the pub/sub semantics are good enough for single-machine fan-out, and adding RabbitMQ or NATS for one consumer felt like architecture theater. ~100 lines of TypeScript.
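A minimal sketch of the extension side, assuming ioredis; `sendUserMessage` stands in for the pi extension's session-injection hook (the real extension API may differ), and the key names match the middleware shown later:

```ts
import Redis from "ioredis";

// Two clients: a subscribed ioredis connection can't issue normal
// commands like RPOP (see Gotchas below).
const sub = new Redis("redis://localhost:6379");
const commands = new Redis("redis://localhost:6379");

const LIST = "joelclaw:events:central";
const CHANNEL = "joelclaw:notify:central";

// Stand-in for the pi extension's session API.
declare function sendUserMessage(text: string): void;

async function drain() {
  // The producer LPUSHes, so RPOP pops the oldest event first.
  let raw: string | null;
  while ((raw = await commands.rpop(LIST)) !== null) {
    const { type, payload } = JSON.parse(raw);
    sendUserMessage(`[gateway:${type}] ${JSON.stringify(payload)}`);
  }
}

// The pub/sub message is just a wake-up ping; the list holds the data.
await sub.subscribe(CHANNEL);
sub.on("message", () => void drain());
```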
⚠️ serveHost is mandatory if your Inngest server runs in Docker and your worker runs on the host. Without it, the SDK advertises localhost:3100 as its callback URL — but that’s the container’s loopback, not yours. Every function run fails silently with “Unable to reach SDK URL.”
```ts
// In your worker's Hono serve handler:
inngestServe({
  client: inngest,
  functions,
  serveHost: "http://host.docker.internal:3100",
})
```

Then force a re-sync so the server picks up the new URL:

```bash
curl -X PUT http://localhost:3100/api/inngest
```

Solved: background jobs can notify the agent.
v2: Multi-session routing
Problem: I run 3-5 pi sessions simultaneously. Heartbeats were interrupting coding sessions.
Solution: one central session (gets all events) + satellite sessions (get only events they started). Sessions register in a Redis set. Events fan out based on originSession tracking.
```
GATEWAY_ROLE=central pi   → gets heartbeats, alerts, everything
pi                        → gets only its own loop completions, downloads
```

Solved: context budgets aren't wasted on irrelevant notifications.
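Here's a minimal sketch of that registry and fan-out logic, assuming a Redis set for registration and a well-known key for the central session (the key names and shapes are illustrative, not the extension's actual code):

```ts
import Redis from "ioredis";

const redis = new Redis("redis://localhost:6379");

interface GatewayEvent {
  type: string;
  payload: Record<string, unknown>;
  origin?: string; // originSession of the job that produced the event
}

// On startup every session adds itself to the registry; the central
// session also claims the well-known "central" slot.
async function register(sessionId: string, role: "central" | "satellite") {
  await redis.sadd("joelclaw:sessions", sessionId);
  if (role === "central") {
    await redis.set("joelclaw:sessions:central", sessionId);
  }
}

// Central gets every event; a satellite only gets events whose
// origin matches its own session ID.
async function route(event: GatewayEvent): Promise<string[]> {
  const central = await redis.get("joelclaw:sessions:central");
  const sessions = await redis.smembers("joelclaw:sessions");
  return sessions.filter((id) => id === central || id === event.origin);
}
```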
v3: Heartbeat + watchdog
An Inngest cron fires every 15 minutes. The gateway extension tracks when the last heartbeat arrived. If 30 minutes pass with nothing — inject an alarm with triage steps.
Three independent failure detection layers: the extension watchdog catches Inngest/worker failures, a launchd tripwire catches pi crashes, and the heartbeat prompt itself runs system health checks.

The "who watches the watchmen" problem is real. Each layer fails independently: Inngest can crash, the worker can hang, the extension can lose Redis, launchd can restart too aggressively. Three uncorrelated monitors is the minimum.
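The extension-side watchdog reduces to a timestamp and a timer. A minimal sketch, assuming heartbeat events arrive through the same drain loop as everything else and `sendUserMessage` injects into the session:

```ts
const HEARTBEAT_INTERVAL = 15 * 60 * 1000; // Inngest cron cadence
const ALARM_AFTER = 30 * 60 * 1000;        // two missed beats → alarm

let lastHeartbeat = Date.now();

// Stand-in for the pi extension's session API.
declare function sendUserMessage(text: string): void;

// Called whenever a heartbeat event is drained from Redis.
export function onHeartbeat() {
  lastHeartbeat = Date.now();
}

// Check twice per heartbeat window so detection isn't a full cycle late.
setInterval(() => {
  if (Date.now() - lastHeartbeat > ALARM_AFTER) {
    sendUserMessage(
      "⚠️ No gateway heartbeat for 30+ minutes. Triage: is the Inngest " +
        "server up? Is the worker synced? Is Redis reachable?"
    );
    lastHeartbeat = Date.now(); // reset so the alarm fires once, not every tick
  }
}, HEARTBEAT_INTERVAL / 2);
```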
Solved: “who watches the watchmen” — more than one watcher.
v4: Gateway middleware SDK
Every Inngest function gets gateway.progress(), gateway.notify(), and gateway.alert() injected via middleware. Functions don’t need to know about Redis or routing.
```ts
async ({ event, step, gateway }) => {
  gateway.progress("Story 3/8 started: implement auth");
  // ... do work ...
  gateway.notify("loop.complete", { stories: 8, passed: 7 });
}
```

The middleware itself is ~30 lines: it creates a Redis client once, then injects the helpers into every function's context:
```ts
import { InngestMiddleware } from "inngest";
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

export const gatewayMiddleware = new InngestMiddleware({
  name: "gateway",
  init() {
    return {
      onFunctionRun({ fn }) {
        return {
          transformInput({ ctx }) {
            const push = (type: string, payload: Record<string, unknown>) => {
              const event = JSON.stringify({
                type,
                payload,
                fn: fn.id,
                ts: Date.now(),
                origin: ctx.event?.data?.originSession,
              });
              redis.lpush("joelclaw:events:central", event);
              redis.publish("joelclaw:notify:central", "1");
            };
            return {
              ctx: {
                ...ctx,
                gateway: {
                  progress: (msg: string) => push("progress", { message: msg }),
                  notify: (topic: string, data?: Record<string, unknown>) =>
                    push("notify", { topic, ...data }),
                  alert: (msg: string, data?: Record<string, unknown>) =>
                    push("alert", { message: msg, ...data }),
                },
              },
            };
          },
        };
      },
    };
  },
});
```

Register it on your Inngest client: `new Inngest({ id: "my-worker", middleware: [gatewayMiddleware] })`.
Solved: functions push status updates without coupling to the delivery mechanism.
Gotchas
Function sync has a delay window. Adding functions and restarting the worker isn’t enough — the server won’t see them until the next --poll-interval cycle (30s in our config) or a manual sync:
```bash
curl -X PUT http://localhost:3100/api/inngest
```

The heartbeat cron was registered by the SDK but invisible to the server for the first minute. If your function isn't triggering, check the Functions tab in the dashboard; if it's not listed, sync hasn't happened yet.
ioredis resolution is flaky in Bun. Bun occasionally can’t resolve @ioredis/commands from within ioredis. Fix: explicitly install the sub-dependency, or nuke and reinstall:
```bash
bun add @ioredis/commands
# or the nuclear option:
rm -rf node_modules && bun install
```

Two ioredis clients for pub/sub. A subscribed Redis client can't run commands like LRANGE or DEL. You need one client for subscriptions and a separate one for reads/writes. This isn't an Inngest gotcha; it's a Redis protocol constraint that bites everyone once.
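Concretely (a sketch, assuming ioredis defaults):

```ts
import Redis from "ioredis";

const sub = new Redis();      // this client enters subscriber mode
const commands = new Redis(); // this one stays free for normal commands

await sub.subscribe("joelclaw:notify:central");

// Rejected: the connection is in subscriber mode.
await sub.lrange("joelclaw:events:central", 0, -1).catch(console.error);

// Fine: a separate connection handles reads and writes.
await commands.lrange("joelclaw:events:central", 0, -1);
```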
What’s next: embedded daemon + Telegram
The extension gets me far. The next step is a standalone daemon that embeds pi as a library, no terminal needed.

This is done now. The daemon runs via launchd with KeepAlive: true. joelclaw gateway restart rolls the session cleanly. Telegram is the first external channel. WebSocket serves the remote TUI. All inputs serialize through one command queue into one session.
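The serialization idea fits in a few lines. A sketch, not the daemon's actual code: every input source chains onto the same promise, so Telegram messages, WebSocket input, and gateway events never interleave mid-turn (`runAgentTurn` is a hypothetical stand-in for one turn of the embedded pi session):

```ts
let tail: Promise<void> = Promise.resolve();

// Each turn chains onto the previous one; errors don't break the chain.
function enqueue(turn: () => Promise<void>): Promise<void> {
  tail = tail.then(turn, turn);
  return tail;
}

// Hypothetical stand-in for one turn of the embedded pi session.
declare function runAgentTurn(source: string, text: string): Promise<void>;

// Every channel funnels through the same queue.
export const onTelegramMessage = (text: string) =>
  enqueue(() => runAgentTurn("telegram", text));
export const onGatewayEvent = (text: string) =>
  enqueue(() => runAgentTurn("gateway", text));
export const onWebSocketInput = (text: string) =>
  enqueue(() => runAgentTurn("websocket", text));
```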
Talk to the agent from your phone. Get responses back in the same thread.
The full stack
```
┌─────────────────┐
│ Inngest server  │  cron heartbeat, durable functions
│  (k8s/Docker)   │  every 15 min + event-driven
└────────┬────────┘
         │ step.run → pushGatewayEvent()
         ▼
┌─────────────────┐
│      Redis      │  event lists, pub/sub, session registry
│  (k8s/Docker)   │  joelclaw:events:*, joelclaw:notify:*
└────────┬────────┘
         │ subscribe + drain
         ▼
┌─────────────────┐
│  pi extension   │  central/satellite routing
│   (gateway)     │  watchdog, dedup, prompt injection
└────────┬────────┘
         │ sendUserMessage()
         ▼
┌─────────────────┐
│   pi session    │  LLM conversation
│     (agent)     │  tools, memory, skills
└─────────────────┘
```

For humans
The deeper architecture narrative is in Inngest is the Nervous System. For the k8s foundation: The One Where Joel Deploys Kubernetes… Again.
This is a living document. Updated as the system evolves.