Rule Engines That Keep Humans in the Loop at Scale
This is a direct reference architecture for [joelclaw](https://joelclaw.com/system): it separates automatic action from human review and keeps a queryable trail of operator verdicts before enforcement escalates.
Most event pipelines die not on volume, but on ambiguity. Osprey, built by ROOST, is explicit about that: automate the obvious and investigate what your models can’t decide confidently. That framing is simple, and it’s why this project feels different from the usual “one more rules engine” repos.
The architecture is intentionally practical: a Rust coordinator and a Python worker layer, with decisions driven by rule logic that can be extended through UDFs and persisted state via an optional labels backend (the sample uses PostgreSQL). The design keeps the hot path fast while still letting operators query outcomes, actions, and past decisions in a way that supports both incident response and auditability.
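The stateful-decision idea is worth making concrete. The sketch below is not Osprey's actual API; the names (`LabelStore`, `Verdict`, `evaluate`) and the score thresholds are hypothetical, standing in for a rules pass backed by a labels store so decisions can reference prior history rather than single-event snapshots.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REVIEW = "review"  # ambiguous: escalate to a human queue


@dataclass
class LabelStore:
    """Stand-in for the optional labels backend (PostgreSQL in the
    sample): persists per-entity labels so later rules can reference
    prior decisions, not just the current event."""
    labels: dict = field(default_factory=dict)

    def add(self, entity: str, label: str) -> None:
        self.labels.setdefault(entity, set()).add(label)

    def has(self, entity: str, label: str) -> bool:
        return label in self.labels.get(entity, set())


def evaluate(event: dict, store: LabelStore) -> Verdict:
    """Toy rules pass: deterministic rules fire first; anything they
    can't decide confidently falls through to human review."""
    entity = event["actor"]
    score = event.get("score", 0.0)
    if store.has(entity, "known_bad"):
        return Verdict.BLOCK          # cross-event continuity
    if score < 0.2:
        return Verdict.ALLOW          # automate the obvious (benign)
    if score > 0.9:
        store.add(entity, "known_bad")  # persist state for future events
        return Verdict.BLOCK          # automate the obvious (malicious)
    return Verdict.REVIEW             # ambiguous band goes to operators
```

The point of the shape: the hot path stays a pure function of the event plus a cheap label lookup, while everything landing in `REVIEW` accumulates in an operator queue instead of blocking the stream.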
Given how joelclaw already treats events as first-class system objects, this reads like a usable safety layer pattern. You get the same shape of problem—streaming ambiguous behavior, possible enforcement, and rollback risk—and Osprey shows a route where humans can keep control without bottlenecking the whole stream.
Key Ideas
- Automate the obvious, inspect the ambiguous: Osprey’s core bet is a split between deterministic policy execution and operator-led investigation, which is a sane pattern for safety-critical systems.
- Language + plugin model: Rule logic is extendable through custom functions instead of hard-coding every hypothesis.
- Stateful decisions at scale: The labels service model supports cross-event continuity so actions can reference prior history, not just single-event snapshots.
- Dual-language implementation: The `osprey_coordinator` (Rust) + `osprey_worker` (Python) split is a performance + flexibility compromise that other rule systems often avoid.
- Built for operational review: The project explicitly values UI-driven investigation and effect testing, not just batch simulation.
- Open collaboration model: ROOST and Discord building the core open-source example around a production safety problem lowers the usual "this is a research toy" risk.
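The UDF/plugin idea from the list above can be sketched in a few lines. This is not Osprey's registration mechanism; `udf`, `UDFS`, and `call_udf` are hypothetical names illustrating how rule logic can call named custom functions registered at startup instead of hard-coding every hypothesis into the engine.

```python
import ipaddress

# Hypothetical registry: rule text references UDFs by name, so adding a
# new detection hypothesis means registering a function, not patching
# the engine.
UDFS = {}


def udf(name: str):
    """Decorator that registers a function under a rule-visible name."""
    def register(fn):
        UDFS[name] = fn
        return fn
    return register


@udf("ip_in_range")
def ip_in_range(ip: str, cidr: str) -> bool:
    """Example UDF: membership test a rule author might want."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)


def call_udf(name: str, *args):
    """What the rule interpreter would do when it hits a UDF call."""
    return UDFS[name](*args)
```

A usage like `call_udf("ip_in_range", "10.0.0.5", "10.0.0.0/24")` shows the payoff: the rule language stays small while the plugin surface absorbs domain-specific checks.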
Links
- Source: https://github.com/roostorg/osprey
- Organization: https://roost.tools
- Upstream context: Discord
- Collaborator: internet.dev
- Example docs: https://github.com/roostorg/osprey/tree/main/docs
- UI reference: https://github.com/roostorg/osprey/blob/main/docs/images/query-and-charts.png
- Related joelclaw surface: joelclaw events
- Internal follow-up spot: /adrs/rule-engine-review.md
- Related discovery hub: /cool/rule-engines