JoelClaw: My Bespoke OpenClaw-inspired Mac Mini

Playing with AT Protocol as a Data Layer

Most personal AI projects are a coding harness like Claude Code and some markdown files. Maybe a folder of notes for context. That’s honestly enough for a lot of people, and it works.

I wanted to go deeper — not because I needed to, but because I got curious about what happens when you give an AI system a real data layer. What if there was a protocol that handled identity, typed data, and real-time events?

Turns out there is. It’s just… designed for social networking.

Why AT Protocol?

AT Protocol is what Bluesky runs on. It wasn’t built for agent systems. But the primitives are weirdly relevant:

  • PDS (Personal Data Server) — each entity gets its own data store
  • DID identity — cryptographic identity that works for humans and agents
  • Lexicons — typed schemas for every record type
  • Firehose — real-time stream of all changes
  • Federation — servers talk to each other by design

The thing that clicked for me: the PDS is the database. You don’t build an API in front of it. You don’t bolt on auth. It already has identity, schema enforcement, and a real-time event stream baked in.
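
To make that concrete, here's a minimal sketch of the write path using the official @atproto/api client, assuming an app password for the agent's account. The service URL, handle, and record type below are illustrative stand-ins, not my actual setup.

  import { AtpAgent } from "@atproto/api";

  // Talk to the PDS directly: no app server, no bespoke auth layer in front.
  const agent = new AtpAgent({ service: "https://pds.example.com" }); // placeholder PDS URL
  await agent.login({
    identifier: "agent.example.com",           // placeholder handle
    password: process.env.PDS_APP_PASSWORD!,   // app password for that account
  });

  // createRecord is plain XRPC; the record lands in the repo under its collection NSID.
  const res = await agent.com.atproto.repo.createRecord({
    repo: agent.session!.did,                  // the writing identity's DID
    collection: "dev.joelclaw.agent.message",  // illustrative NSID, not the real schema
    record: {
      $type: "dev.joelclaw.agent.message",
      text: "hello from the agent runtime",
      createdAt: new Date().toISOString(),
    },
  });

  console.log(res.data.uri); // at://did:plc:.../dev.joelclaw.agent.message/<rkey>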

Is this the right tool for the job? Honestly, I’m not sure yet. There are simpler approaches — mTLS with JWTs, ActivityPub, just running everything on localhost. But I’m curious whether a protocol designed for trust relationships between entities on a network could work for trust relationships between agents and humans. That feels worth exploring.

What I’m storing

I defined custom Lexicons under dev.joelclaw.*:

dev.joelclaw.agent.*    — messages, threads, tool calls
dev.joelclaw.memory.*   — sessions, playbook, timeline, soul
dev.joelclaw.system.*   — events, logs, health, config
dev.joelclaw.family.*   — lists, reminders, shared context
dev.joelclaw.loop.*     — coding iterations, PRD state
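
Each of those namespaces is backed by Lexicon JSON documents published under an NSID. I won't reproduce the real schemas here, but a memory-session record might be sketched roughly like this, with every field name below being illustrative:

  // Roughly what a session record schema could look like; field names are illustrative.
  const sessionLexicon = {
    lexicon: 1,
    id: "dev.joelclaw.memory.session",
    defs: {
      main: {
        type: "record",
        key: "tid", // timestamp-based record keys, same scheme Bluesky posts use
        record: {
          type: "object",
          required: ["startedAt", "summary"],
          properties: {
            startedAt: { type: "string", format: "datetime" },
            endedAt:   { type: "string", format: "datetime" },
            summary:   { type: "string", maxLength: 2000 },
            topics:    { type: "array", items: { type: "string" } },
          },
        },
      },
    },
  } as const;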

Every record is typed, stored on the PDS, and accessible via XRPC. The agent runtime (Inngest functions) subscribes to the firehose, processes events, and writes results back.
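
Here's the consuming side, sketched with Inngest's TypeScript SDK, assuming a separate firehose listener that turns repo commits into Inngest events. Event names, payload shape, and the helper functions are illustrative:

  import { Inngest } from "inngest";

  const inngest = new Inngest({ id: "joelclaw-runtime" }); // illustrative app id

  // Placeholder helpers standing in for the real message handling and PDS write-back.
  async function processMessage(record: unknown, did: string): Promise<unknown> { return record; }
  async function writeRecordToPds(record: unknown): Promise<void> {}

  // React to new agent messages coming off the firehose and write a result back.
  export const onAgentMessage = inngest.createFunction(
    { id: "handle-agent-message" },
    { event: "pds/record.created" }, // illustrative event name emitted by the firehose listener
    async ({ event, step }) => {
      const { collection, record, did } = event.data;
      if (collection !== "dev.joelclaw.agent.message") return;

      const reply = await step.run("process-message", () => processMessage(record, did));
      await step.run("write-reply", () => writeRecordToPds(reply));
    }
  );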

Is it overkill? Almost certainly. But I’ve been burned enough times by “just throw it in a database and figure out the schema later.” And honestly, the Lexicon system is fun to work with. Defining schemas for your own data feels like building with Lego. 😅

Two halves of truth

The PDS holds agent data — structured, typed, federated. But there’s a second half: the Obsidian Vault holds human knowledge — prose, wikilinks, architecture decisions, project notes.

Neither replaces the other. Qdrant indexes both. The agent reads from both. But they serve different purposes:

  • PDS: machine-written, machine-read, structured, versioned
  • Vault: human-written, human-read, narrative, browseable

I need both. The agent needs both. Whether the PDS is the right tool for the machine half is part of the experiment.
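
On the retrieval side, both halves end up as points in one Qdrant collection, tagged with where they came from, so a single search can span PDS records and Vault notes. A minimal sketch with the Qdrant JS client; the collection name and payload fields are assumptions, and the embedding step is elided:

  import { QdrantClient } from "@qdrant/js-client-rest";
  import { randomUUID } from "node:crypto";

  const qdrant = new QdrantClient({ url: "http://localhost:6333" });

  // vector comes from whatever embedding model is in use; ref is an at:// URI or a vault file path.
  async function indexChunk(vector: number[], text: string, source: "pds" | "vault", ref: string) {
    await qdrant.upsert("joelclaw-memory", { // illustrative collection name
      wait: true,
      points: [{ id: randomUUID(), vector, payload: { source, ref, text } }],
    });
  }

The source tag is what would let the agent filter to only machine memory, or only human notes, when it needs to.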

The trust thing

Here’s what actually interests me about AT Protocol for this: trust and identity in a mixed network.

When you have agents and humans interacting on the same system, you need to know who is doing what and whether they’re allowed to. DIDs give every entity — human or agent — the same identity primitives. Lexicons define the shape of the records each entity reads, writes, and acts on. The protocol already has a model for “this entity is trusted to do these things.”
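
Concretely, any party can resolve a DID to its document and get the public signing keys and PDS endpoint, with no central account system in the loop. A sketch for a did:plc identity (the DID itself is a placeholder):

  // Resolve a did:plc identity to its DID document via the public PLC directory.
  const did = "did:plc:exampleexampleexample"; // placeholder, not a real identity

  const doc = await fetch(`https://plc.directory/${did}`).then((r) => r.json());

  // The document carries the same primitives whether the identity is a human or an agent:
  console.log(doc.verificationMethod); // public signing keys
  console.log(doc.service);            // includes the #atproto_pds service endpoint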

That’s interesting for families too. Not as a product roadmap — I’m nowhere near that — but as a thought experiment. What if each family member had their own PDS with their own agent, and trust relationships defined what each agent could see and do? The protocol has the primitives for that built in. Whether it’s the right way to do it… I’ll find out.

Honest tradeoffs

The ecosystem is immature for this use case. The Swift AT Protocol client libraries are thin. Running multiple PDS instances on one Mac Mini is going to be a pain in the ass. I might end up ripping all of this out and using something simpler.

But that’s fine. This is a learning project. If I spend three months with AT Protocol and decide it’s the wrong tool, I’ll have learned a ton about federated identity, typed record systems, and real-time event streams. That knowledge transfers no matter what I end up using.


Previous: Why I’m building my own AI system (for fun). Next: the 4-layer memory architecture.