ADR-0183 (accepted)

ADR Priority Rubric and Daily Ranking

Status: accepted
Date: 2026-03-01
Updated: 2026-03-01
Deciders: Joel Hooks, Panda
Related: ADR-0174 (vault ADR tooling), ADR-0169 (CLI command contracts), ADR-0186 (persisted Q&A + rubric reasoning)

Context

The ADR inventory is healthy, but prioritization is ad hoc.

Problems:

  • too many open ADRs (proposed + accepted) competing for attention
  • status alone does not tell us urgency, readiness, or execution risk
  • decisions can stall without a daily ranking pass
  • no explicit confidence/readiness signal for “do now vs de-risk first”

We need a deterministic rubric that agents can run every day.

Decision

Adopt a mandatory NRC+Novelty rubric for open ADRs:

  • Need (0–5)
  • Readiness (0–5)
  • Confidence (0–5)
  • Novelty / Cool Factor (0–5)

Axis definitions

| Axis | 0 | 3 | 5 |
|---|---|---|---|
| Need | nice-to-have | useful soon | urgent / high leverage / unblocks core work |
| Readiness | vague / blocked | mostly defined, some blockers | ready now: clear scope, dependencies available |
| Confidence | unknown path / high risk | mixed evidence | proven path / low execution risk |
| Novelty / Cool Factor | commodity maintenance with little strategic upside | somewhat interesting or differentiating | high signal, strategic leverage, or uniquely worth doing |

Priority score

Compute normalized priority score with a novelty adjustment:

base_100 = round(20 * (0.5*Need + 0.3*Readiness + 0.2*Confidence))
score_100 = clamp(base_100 + round((Novelty-3)*5), 0, 100)
  • Need carries the most weight in base score.
  • Novelty is a bounded adjustment, not the primary driver.
  • If novelty is missing, use a neutral default of 3.
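The formula and defaults above can be sketched in TypeScript. This is a minimal illustration, not shipped tooling; the `Rubric` type and `priorityScore` name are hypothetical.

```typescript
// Hypothetical rubric shape; novelty is optional and defaults to neutral 3.
type Rubric = { need: number; readiness: number; confidence: number; novelty?: number };

const clamp = (x: number, lo: number, hi: number): number =>
  Math.min(hi, Math.max(lo, x));

function priorityScore(r: Rubric): number {
  const novelty = r.novelty ?? 3; // missing novelty -> neutral default
  // base_100 = round(20 * (0.5*Need + 0.3*Readiness + 0.2*Confidence))
  const base100 = Math.round(20 * (0.5 * r.need + 0.3 * r.readiness + 0.2 * r.confidence));
  // bounded novelty adjustment: +/-10 at the extremes, 0 at neutral
  return clamp(base100 + Math.round((novelty - 3) * 5), 0, 100);
}

// Example: Need 5, Readiness 4, Confidence 3, Novelty 4
// base_100 = round(20 * (2.5 + 1.2 + 0.6)) = 86; score_100 = 86 + 5 = 91
```

Note that the novelty term can move a score by at most 10 points in either direction, which keeps Need as the dominant driver.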

Priority bands

  • 80–100: do-now
  • 60–79: next
  • 40–59: de-risk
  • 0–39: park
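As a sketch, the band thresholds map to a simple lookup (the `bandFor` name is illustrative, assuming scores are already clamped to 0–100):

```typescript
type Band = "do-now" | "next" | "de-risk" | "park";

// Map a clamped 0-100 score to its priority band.
function bandFor(score: number): Band {
  if (score >= 80) return "do-now";
  if (score >= 60) return "next";
  if (score >= 40) return "de-risk";
  return "park";
}
```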

Hard gates

  • if Readiness < 3: do not start implementation work; create unblock plan first
  • if Confidence < 3: require spike/prototype before full implementation
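The two gates can be evaluated mechanically; the sketch below assumes the `{ autoEligible, failures }` output shape named under "Reasoning capture contract", and the function name is hypothetical:

```typescript
// Evaluate the hard gates: Readiness < 3 blocks implementation work,
// Confidence < 3 requires a spike/prototype first.
function evaluateGates(r: { readiness: number; confidence: number }): {
  autoEligible: boolean;
  failures: string[];
} {
  const failures: string[] = [];
  if (r.readiness < 3) failures.push("readiness<3: create unblock plan before implementation");
  if (r.confidence < 3) failures.push("confidence<3: run spike/prototype before full implementation");
  return { autoEligible: failures.length === 0, failures };
}
```

A gated ADR can still rank high on score; the gate only constrains what kind of work (unblocking, spiking) happens next.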

Daily cadence (mandatory)

  • score all open ADRs (proposed, accepted) daily
  • re-score immediately after incidents, major deploys, dependency shifts, or ADR lifecycle events (created, updated, status changed)
  • re-score immediately when answered questions materially change rubric evidence or gate state
  • if band changes, update rationale the same day

ADR frontmatter fields

Open ADRs must include:

priority-need: 0-5
priority-readiness: 0-5
priority-confidence: 0-5
priority-novelty: 0-5           # optional, defaults to 3 when absent
priority-score: 0-100
priority-band: do-now|next|de-risk|park
priority-reviewed: YYYY-MM-DD
priority-rationale: one-line reason

Reasoning capture contract

Each scoring pass must persist reasoning alongside numeric rubric outputs.

Minimum persisted fields (outside ADR frontmatter):

  • source trigger (daily, adr.created, adr.updated, adr.status.changed, question.answered)
  • reasoning summary
  • evidence references used for scoring
  • assumptions and key risks
  • gate evaluation (autoEligible, failures)

ADR frontmatter remains compact (priority-* summary). Rich reasoning/evidence is persisted and indexed per ADR-0186.

Sort contract

Rank ADRs by:

  1. priority-band (do-now > next > de-risk > park)
  2. priority-score (desc)
  3. priority-need (desc)
  4. ADR number (asc, stable tie-break)
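One way to implement this sort contract is a single comparator; the `RankedAdr` shape is illustrative, not a real tooling type:

```typescript
// Lower index = higher priority band.
const BAND_ORDER = { "do-now": 0, "next": 1, "de-risk": 2, "park": 3 } as const;

type RankedAdr = {
  number: number;                   // ADR number
  band: keyof typeof BAND_ORDER;    // priority-band
  score: number;                    // priority-score
  need: number;                     // priority-need
};

function compareAdrs(a: RankedAdr, b: RankedAdr): number {
  return (
    BAND_ORDER[a.band] - BAND_ORDER[b.band] || // 1. band (do-now first)
    b.score - a.score ||                       // 2. score, descending
    b.need - a.need ||                         // 3. need, descending
    a.number - b.number                        // 4. ADR number, ascending (stable tie-break)
  );
}

// Usage: adrs.sort(compareAdrs) yields the daily ranking order.
```

Because the final tie-break is the ADR number, the ranking is fully deterministic for any set of scored ADRs.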

Consequences

Good

  • daily ordering is explicit and repeatable
  • “what to do next” becomes machine-readable
  • lower-confidence work is surfaced for de-risking before large bets
  • agents can prioritize without guessing intent

Tradeoffs

  • adds metadata upkeep overhead to open ADRs
  • stale scores become misleading if daily pass is skipped

Implementation sequence (vector clock)

  1. Policy adopted in ADR-0183.
  2. Populate priority fields on all open ADRs.
  3. Add CLI ranking surface (joelclaw vault adr rank) aligned with this rubric.
  4. Integrate ranking into routine health/reporting output.
  5. Add event-triggered rerank on ADR lifecycle changes.
  6. Persist rubric reasoning/evidence snapshots per scoring pass.

Compliance

Any new proposed or accepted ADR without Need/Readiness/Confidence score fields is non-compliant until scored. Novelty is strongly recommended; when absent, tooling assumes neutral 3. Scoring runs without persisted reasoning/evidence context are non-compliant.