Inngest is the Nervous System


Every interesting thing my system does is an Inngest function.

Download a YouTube video, transcribe it locally with Whisper, write a vault note, enrich it with web research, update the daily log — that’s five steps across three tools, and any of them can fail. Before Inngest, a failure in step three meant running the whole thing again. Now each step retries independently. The machine reboots, the job picks up where it left off.

That’s the whole pitch. Step-level durability for TypeScript functions, self-hosted in a single Docker container. Everything else is details.
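
Here's roughly what that looks like: a minimal sketch, not the real pipeline, with the helper functions stubbed out. Each step.run() result is checkpointed, so a failure in the last step retries only that step; the earlier results are replayed from saved state.

import { Inngest } from "inngest";

const inngest = new Inngest({ id: "system-bus" });

// Placeholder helpers; the real ones live elsewhere in the worker
declare function download(url: string): Promise<string>;
declare function transcribe(file: string): Promise<string>;
declare function writeNote(text: string): Promise<void>;

export const ingestSketch = inngest.createFunction(
  { id: "video-ingest-sketch" },
  { event: "pipeline/video.download" },
  async ({ event, step }) => {
    // Each step.run() is a checkpoint. If write-note throws, only
    // write-note retries; download and transcribe are replayed from
    // Inngest's saved state, not re-executed.
    const file = await step.run("download", () => download(event.data.url));
    const text = await step.run("transcribe", () => transcribe(file));
    await step.run("write-note", () => writeNote(text));
  }
);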

The stack

Events (HTTP POST or inngest.send())

┌──────────────────┐      ┌────────────────────────┐
│  Inngest Server  │─────▶│  system-bus worker     │
│  (Docker)        │      │  (Bun + Hono)          │
│  localhost:8288  │      │  localhost:3111        │
│                  │      │                        │
│  Event API       │      │  10 functions:         │
│  Dashboard UI    │      │  - video-download      │
│  Queue + State   │      │  - transcript-process  │
│  SQLite persist  │      │  - content-summarize   │
└──────────────────┘      │  - system-logger       │
                          │  - agent-loop-plan     │
                          │  - agent-loop-impl     │
                          │  - agent-loop-review   │
                          │  - agent-loop-judge    │
                          │  - agent-loop-complete │
                          │  - agent-loop-retro    │
                          └────────────────────────┘

The Inngest server runs as a k8s StatefulSet in a k3d cluster with persistent volume claims. The worker is a Bun + Hono app managed by launchd with KeepAlive: true. Both survive reboots. Caddy terminates TLS via Tailscale certs so I can hit the dashboard from my phone.

Ten functions. Two pipelines. One event bus.

Pipeline 1: Video ingest

This is the one that made it all click for me. Send a YouTube URL, get back a fully enriched vault note with executive summary, key points, speaker context, quotes, and timestamped transcript. The whole chain:

pipeline/video.download

    ├─ Step 1: yt-dlp downloads video + metadata
    ├─ Step 2: scp to NAS (70TB Asustor)
    ├─ Step 3: slog write (structured log)
    └─ Step 4: Emit two events ──┐

    ┌─────────────────────────────┘


pipeline/transcript.process

    ├─ Step 1: mlx-whisper on Apple Silicon (local, no API)
    ├─ Step 2: Create vault note with frontmatter + transcript
    ├─ Step 3: Append to daily note
    ├─ Step 4: slog write
    └─ Step 5: Emit content/summarize ──┐

    ┌────────────────────────────────────┘


content/summarize

    ├─ Step 1: Read title from vault note
    ├─ Step 2: Run pi with full tool access (web search, edit)
    └─ Step 3: slog write + emit content/summarized

Three functions, chained by events. Each one is independently retryable. The transcript step has concurrency: { limit: 1 } because Whisper saturates the GPU — I don’t want two transcriptions fighting for VRAM.
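
The function config for that is tiny. A sketch, using the id and event names from the diagrams above, with the body elided:

export const transcriptProcess = inngest.createFunction(
  {
    id: "transcript-process",
    // One transcription at a time, machine-wide: Whisper owns the GPU
    concurrency: { limit: 1 },
  },
  { event: "pipeline/transcript.process" },
  async ({ event, step }) => {
    // ... transcribe, write the vault note, append to the daily note,
    //     emit content/summarize
  }
);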

The summarize function is the wild one. It spawns a full pi session that reads the vault note, searches the web for the speaker, finds their profiles and related work, and rewrites the executive summary in my voice. It uses the joel-writing-style skill to calibrate tone. The result is a note that reads like I wrote it after spending an hour researching — except it took three minutes.
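
Mechanically it's a subprocess wrapped in one durable step. A sketch, assuming pi is invoked as a CLI; the arguments and the promptFile variable are placeholders, not the real invocation:

// The whole agent session runs inside one durable step, so a crash
// retries the session and a success checkpoints its output.
const enriched = await step.run("run-pi-session", async () => {
  const proc = Bun.spawn(["pi", promptFile], { stdout: "pipe" }); // placeholder args
  const output = await new Response(proc.stdout).text();
  if ((await proc.exited) !== 0) throw new Error("pi session failed");
  return output;
});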

The claim-check pattern

One thing I hit immediately: Inngest step outputs have a size limit. A 3-hour transcript is easily 1MB+ of JSON. You can’t pass that between steps.

The fix is what Inngest calls the claim-check pattern. The transcribe step writes the full transcript to a temp file and returns only the file path. The next step reads from that path. State stays small, data stays accessible.

// Step 1: transcribe — returns only the path
const transcriptPath = await step.run("transcribe", async () => {
  // ... run mlx-whisper, write cleaned JSON to outFile
  return outFile; // just the path, not the data
});
 
// Step 2: create vault note — reads from path
await step.run("create-vault-note", async () => {
  const transcript = await Bun.file(transcriptPath).json();
  // ... build the note using transcript data
});

Small thing, but it’s the kind of gotcha that would’ve cost me hours without the dashboard showing me exactly where the step failed and why.

Pipeline 2: Autonomous coding loops

This is the ambitious one. A durable 4-role pipeline that takes a PRD, executes stories one at a time with AI coding agents, and produces committed code. Each role is its own Inngest function — independently retryable, independently traceable.

agent/loop.start
    │
    ▼
PLANNER (agent-loop-plan)
    │  Reads PRD, finds next story, selects tool
    ▼
IMPLEMENTOR (agent-loop-impl)
    │  Writes code using codex/claude/pi, commits
    ▼
REVIEWER (agent-loop-review)
    │  Writes tests from acceptance criteria (NOT from implementation)
    │  Runs typecheck + lint + tests
    ▼
JUDGE (agent-loop-judge)
    │  PASS → next story | FAIL → back to implementor with feedback
    ▼
COMPLETE (agent-loop-complete) → RETRO (agent-loop-retro)

The key architectural decisions:

Each role is a separate Inngest function run. Not steps within one function — separate runs. This means each one has its own retry policy, its own timeout, its own trace in the dashboard. When the implementor times out after 15 minutes on a hard story, it retries without re-running the planner.
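
In Inngest terms that's just per-function config. A sketch, where the retry count and the triggering event name are my assumptions:

export const implementor = inngest.createFunction(
  {
    id: "agent-loop-impl",
    retries: 2, // this role's own policy, independent of the planner's
  },
  { event: "agent/loop.implement" }, // assumed event name
  async ({ event, step }) => {
    // ... run the coding agent, commit, emit the review event
  }
);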

The reviewer writes tests independently. This is from AgentCoder research — when the same agent writes code and tests, the tests are biased toward the implementation. Having a separate reviewer write tests from the acceptance criteria text catches real bugs.

Smart tool dispatch. Not every story needs the same tool. The planner examines the story and picks:

Story signals                         Tool     Why
Pure code, migrations, type changes   codex    Fast, focused
UI, needs browser verification        pi       Has agent-browser
Needs web research                    pi       Has web search
Complex multi-file refactor           claude   Largest context

Event-chained, not orchestrated. There’s no master process holding state. Each function emits an event that triggers the next. State lives in the events themselves plus git (commits) and the filesystem (progress.txt). If the whole worker crashes mid-loop, Inngest replays from the last completed step when it comes back up.
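
The hand-off itself is a durable step. A sketch of the implementor's final step, with the event name and payload fields assumed:

// Emitting the next event is itself a durable step, so the hand-off
// survives a crash: either the reviewer gets triggered or the send retries.
await step.sendEvent("hand-off-to-reviewer", {
  name: "agent/loop.review", // assumed event name
  data: {
    project: event.data.project, // doubles as the concurrency key
    loopId: event.data.loopId,
  },
});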

The concurrency contract

concurrency: {
  key: "'agent-loop/' + event.data.project", // a CEL expression, not a template string
  limit: 1,
}

One loop per project at a time. No parallel mutations to the same repo. The key is the project path, so I could theoretically run loops on two different projects simultaneously — but I haven’t tested that yet.

Cancellation

Every role function checks for cancellation at entry. igs loop cancel <loopId> writes a cancel flag, kills the subprocess, and the next function in the chain sees the flag and stops. No orphan processes.
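
The guard at the top of each role function is a few lines. A sketch, assuming the cancel flag is a file on disk at a hypothetical path:

const cancelled = await step.run("check-cancel-flag", () =>
  Bun.file(`${loopDir}/cancel.flag`).exists() // hypothetical flag path
);
if (cancelled) {
  // Return without emitting anything, so the chain ends here
  return { status: "cancelled" };
}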

The event schema

Ten event types power everything. Here’s the full map:

Event                          What triggers it                           What happens
pipeline/video.download        igs send or curl                           Download + NAS transfer
pipeline/video.downloaded      video-download function                    Logs to system-logger
pipeline/transcript.process    video-download completion                  Whisper → vault note
pipeline/transcript.processed  transcript-process function                Logs to system-logger
content/summarize              transcript completion                      pi enriches vault note
content/summarized             summarize function                         Logs to system-logger
system/log                     Anything                                   Appends to system-log.jsonl
agent/loop.start               igs loop start                             Kicks off coding loop
agent/loop.plan                Planner emits when ready for next story    Story selection + tool dispatch
agent/loop.complete            All stories done or max iterations         Summary + retrospective

Every event is typed in TypeScript. The client file is ~200 lines of type definitions. If you send a malformed event, TypeScript catches it at compile time, not at runtime.
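
This is Inngest's EventSchemas API. A trimmed sketch of the client, showing two of the ten events with illustrative payload fields:

import { EventSchemas, Inngest } from "inngest";

type Events = {
  "pipeline/video.download": { data: { url: string } };
  "content/summarize": { data: { notePath: string } };
  // ... the other eight
};

export const inngest = new Inngest({
  id: "system-bus",
  schemas: new EventSchemas().fromRecord<Events>(),
});

// inngest.send({ name: "pipeline/video.download", data: { url: 42 } });
// ^ compile error: 'number' is not assignable to type 'string'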

The system logger

One function listens to everything:

export const systemLogger = inngest.createFunction(
  { id: "system-logger" },
  [
    { event: "pipeline/video.downloaded" },
    { event: "pipeline/transcript.processed" },
    { event: "content/summarized" },
    { event: "pipeline/book.downloaded" },
    { event: "system/log" },
  ],
  async ({ event }) => {
    // Normalize and append to system-log.jsonl
  }
);

Multi-trigger. Every pipeline completion event gets logged in the same canonical format that slog uses. The system log becomes a unified activity stream — I can slog tail and see video downloads, transcriptions, coding loop iterations, and manual log entries all in one place.

Why not just cron + scripts

I tried that. For about a week. Here’s what broke:

  • A 3-hour video download failed at the transcription step. Had to re-download.
  • Two transcriptions ran simultaneously and OOM’d the Mac Mini.
  • A script crashed overnight and I didn’t know until the next morning.
  • I couldn’t tell which step failed without reading log files.

Inngest solves all of these with retry, concurrency limits, the dashboard, and step-level durability. The overhead is one Docker container and a Bun process. That’s it.

What’s running today

The system log tells the real story. Here’s what flowed through Inngest in the last 24 hours:

  • Downloaded and transcribed 3 YouTube videos (including the 3-hour Lex Fridman episode)
  • Enriched each with web research and wrote vault notes in my voice
  • Ran a full 9-story coding loop that built the agent loop system itself (yes, it built itself)
  • Ran a second loop for v2 improvements — Docker isolation, duration tracking, branch management
  • Logged 15+ structured events to the system log

All durable. All retryable. All traceable in the dashboard at localhost:8288.

What’s next

The retrospective function is the one I’m most excited about. After every coding loop completes, agent-loop-retro fires and does a post-mortem: what worked, what didn’t, which tools performed best, what codebase patterns were discovered. That output feeds into the memory system — specifically the playbook layer that helps the planner make better tool selections next time.

The loop gets better at its job by reflecting on its own work. Which is kind of the whole point of building this thing.


Give your agent the skills

Inngest published official agent skills — six skills covering setup, events, steps, flow control, middleware, and durable functions. Install them and your agent knows how to build all of this without you explaining the patterns:

npx skills add inngest/inngest-skills --yes --global

Every pattern in this post — step-level durability, event chaining, concurrency keys, claim-check, middleware — is covered. The skills work with Claude Code, pi, Cursor, and anything that supports the skills spec.


This is part of a series about building a personal AI system. Previous: Playing with AT Protocol as a Data Layer.