Lavra v0.7.5 Release Notes
Release date: 2026-04-18
Two installer fixes from live testing of v0.7.4 via bunx, two workflow fixes for multi-agent epic execution, six commands converted to skills, a manifest-based upgrade mechanism that keeps installed files in sync automatically, and four pipeline changes that stop the implement-review-fix cycle from compounding.
Bug fixes
kb_sync silently dropped curated knowledge entries after a cold rebuild
Reported by mjn298.
_kb_sync_jsonl used a tail -n +SKIP shortcut to avoid re-importing entries already in SQLite. The offset was DB_COUNT - 50, where DB_COUNT came from the row count after the first-time bead-comment import — which happens before knowledge.jsonl is touched at all.
So: cold rebuild, bead comments produce 1,500 DB rows, knowledge.jsonl has 1,065 lines. The skip offset overshoots the end of the file. tail returns nothing. Every JSONL-only entry — anything written by /lavra-learn, any hand-curated knowledge — is silently gone.
The optimization was saving a PRIMARY KEY index lookup per row. Sub-millisecond work, at the cost of data loss. _kb_sync_jsonl now reads the full file and lets kb_insert’s existing key-level dedupe handle duplicates, which is what it was there for.
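The overshoot is easy to reproduce with plain coreutils. A minimal sketch (hypothetical file contents; the real _kb_sync_jsonl lives in Lavra's memory scripts):

```shell
cd "$(mktemp -d)"
printf 'entry1\nentry2\nentry3\n' > knowledge.jsonl   # 3 JSONL entries

DB_COUNT=1500                 # rows from the first-time bead-comment import
SKIP=$((DB_COUNT - 50))       # old skip offset: 1450

# tail -n +1450 on a 3-line file yields nothing: every entry is dropped
tail -n "+$SKIP" knowledge.jsonl | wc -l    # prints 0

# The fix: feed every line through and let kb_insert's key-level dedupe
# discard anything already in SQLite
while IFS= read -r line; do
  echo "kb_insert <- $line"
done < knowledge.jsonl
```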
If you ran /lavra-learn and then rebuilt your knowledge DB, your curated entries may be missing. Run kb_sync once after upgrading to recover them.
Cortex installer: missing js-yaml dependency via bunx
When installing for Cortex Code via bunx @lavralabs/lavra@latest --cortex, the conversion script failed with:
error: cannot find package 'js-yaml'
The scripts/node_modules/ directory is not included in the npm package. OpenCode and Gemini already had a fix for this: the installer runs bun install if node_modules is missing before invoking the conversion. Cortex was missing that check. Fixed.
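The guard the Cortex installer was missing looks roughly like this (a sketch with illustrative names; the shipped installers may structure it differently):

```shell
# Install script dependencies only when node_modules is absent,
# then run the conversion, which can now resolve js-yaml.
ensure_deps() {
  if [ ! -d "$1/node_modules" ]; then
    (cd "$1" && bun install)
  fi
}

# Usage (in the installer):
#   ensure_deps scripts
#   bun scripts/convert-to-cortex.ts
```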
Installer output showed bizarre /private/var temp paths
When installed via bunx, the post-install message showed paths like:
To install in another project: bash /private/var/folders/.../lavra/install.sh /path/to/your-project
To uninstall: bash /private/var/folders/.../lavra/uninstall.sh /path/to/your-project
These paths point to bunx’s temporary extraction directory, which is cleaned up after the command exits. They’re useless and confusing. The Claude and Cortex installers now show the correct bunx commands instead:
To install in another project: bunx @lavralabs/lavra@latest --claude /path/to/your-project
To uninstall: bunx @lavralabs/lavra@latest --uninstall
Parallel agents ignored epic-level design decisions
When running an epic through /lavra-work, parallel subagents only received their individual bead’s context, not the parent epic’s plan. If the epic had a ## Locked Decisions section with fields or behaviors that weren’t fully wired in a given task, agents had no way to know those items were intentional. Reviewers then flagged them as dead code and recommended removal, overriding a locked decision they couldn’t see.
The orchestrator now fetches the epic’s ## Locked Decisions, ## Agent Discretion, and ## Deferred sections in Phase M6 and injects them into every parallel agent prompt and every /lavra-review invocation. Agents are told explicitly not to remove or stub out anything covered by Locked Decisions, and the review synthesis step discards findings that target planned-but-incomplete items.
recall.sh returned empty results when called from a subdirectory
When agents changed into a subdirectory and called .lavra/memory/recall.sh, the script resolved the knowledge base path relative to the current working directory, found nothing, and silently moved on. The recall step looked like it ran but returned no results.
When CLAUDE_PROJECT_DIR is not set, recall.sh now walks up the directory tree from the current working directory until it finds a .lavra/ directory and uses that as the project root. Agents that cd into subdirectories during implementation now get knowledge recall that actually works.
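The walk-up fallback can be sketched as follows (hypothetical function name; recall.sh's actual variable names may differ):

```shell
# Walk up from $PWD until a .lavra/ directory is found; that directory's
# parent is the project root. Returns non-zero if nothing is found.
find_project_root() {
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -d "$dir/.lavra" ]; then
      echo "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1
}

# Usage in recall.sh (only when CLAUDE_PROJECT_DIR is unset):
#   ROOT="${CLAUDE_PROJECT_DIR:-$(find_project_root)}"
#   KB="$ROOT/.lavra/memory/knowledge.jsonl"
```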
OpenCode memory plugin: DEVIATION: comments were silently dropped
The OpenCode native plugin (opencode-src/plugin.ts) captures knowledge comments from bd comments add calls and stores them in the knowledge base. It matched five prefixes — LEARNED:, DECISION:, FACT:, PATTERN:, INVESTIGATION: — but not DEVIATION:.
DEVIATION: comments document auto-fixes applied outside bead scope: bugs fixed to unblock a task, missing functionality added, infrastructure issues resolved. These entries were silently ignored on OpenCode installs, leaving a gap that Claude Code users didn’t have.
The pattern match now includes DEVIATION: alongside the other five prefixes.
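The corrected prefix match, sketched as a grep -E pattern (the real plugin is TypeScript in opencode-src/plugin.ts; this mirrors its intent):

```shell
KNOWLEDGE_PREFIX='^(LEARNED|DECISION|FACT|PATTERN|INVESTIGATION|DEVIATION):'

is_knowledge_comment() {
  printf '%s' "$1" | grep -Eq "$KNOWLEDGE_PREFIX"
}

is_knowledge_comment "DEVIATION: fixed flaky migration to unblock the task" \
  && echo "captured"
```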
Improvements
/lavra-work split into focused skills
/lavra-work was a single 1,162-line command that loaded the entire multi-bead orchestration machinery even for single-bead runs. Every invocation paid the full context cost regardless of which path was taken.
The command is now a thin router (~100 lines). After parsing arguments and determining whether the input is one bead or many, it delegates to one of two new skills:
- lavra-work-single — the single-bead path (Phases 1-5: quick start, implement, review, learn, ship)
- lavra-work-multi — the multi-bead orchestration path (Phases M1-M10: gather, conflict detection, wave execution, verification, push)
The multi-bead subagent prompt template, which was previously embedded inline in Phase M7, now lives at skills/lavra-work-multi/references/subagent-prompt.md. The orchestrator reads it at dispatch time and fills the placeholders. /lavra-work-ralph and /lavra-work-teams both reference this template and were updated to read from the new location.
Caveman-lite prose pass on commands, agents, and skills
45 files got a prose compression pass: filler phrases cut, hedging removed, redundant framing dropped. Technical content and code blocks are unchanged. The net reduction is ~2,800 tokens. These files load on every invocation, so it adds up.
A new project-wide style rule enforces caveman-lite going forward: no “Make sure to”, “Note that”, “In order to”, or similar filler. The pre-release check flags violations before tagging.
Karpathy coding principles built into implementation
Based on Andrej Karpathy’s observations about common LLM coding failures, two principles are now explicit gates in the implementation path:
Simplicity first: implement the minimum code that fulfills the bead. No speculative features, unnecessary abstractions, or unasked-for configurability.
Surgical changes: edit only what the bead requires. Preserve existing code style. Don’t refactor or improve adjacent code that isn’t in scope.
These appear at the top of Phase 2 in lavra-work-single and in the subagent prompt template used by lavra-work-multi, so both single-bead and parallel-agent runs enforce them.
A third principle — Think before planning — is now an explicit gate in /lavra-plan. Before proceeding to research, the agent must state its interpretation of the request, list any assumptions it’s making, and resolve ambiguity with a single focused question if needed. Clear requests proceed without interruption. The fourth Karpathy principle (goal-driven execution) was already handled by the goal-verifier agent.
Six commands converted to skills
/lavra-brainstorm, /lavra-plan, /lavra-research, /lavra-ceo-review, /lavra-eng-review, and /lavra-review were standalone commands that did one thing: invoke a skill. That’s now done directly. The commands are gone; the skills remain. /lavra-design calls them via Skill() the same way it always did, so the workflow is unchanged.
Core command count drops from 24 to 18. The six skills are still there — they just no longer have redundant command wrappers.
Upgrading from an older install? The new installer removes the stale command files automatically. See the manifest-based upgrade section below.
Installer manifest-based upgrade sync
Old files used to accumulate. If a command was renamed or converted to a skill, the previous file stayed in .claude/commands/ indefinitely. Cleaning it up meant adding it to a hardcoded list in every installer and hoping the user ran install again.
The installer now writes .lavra/.lavra-manifest tracking every file it places. On the next run, anything in the manifest that’s no longer in the source is removed. No migration lists. Works for commands, agents, and skill directories across all four platforms (Claude Code, OpenCode, Gemini CLI, Cortex).
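The sync itself is a set difference between the old manifest and the new file list. A sketch with sample data (hypothetical layout; the shipped installers handle directories and all four platforms):

```shell
cd "$(mktemp -d)"
mkdir -p .lavra .claude/commands
printf '.claude/commands/lavra-plan.md\n.claude/commands/lavra-work.md\n' > .lavra/.lavra-manifest
printf '.claude/commands/lavra-work.md\n' > manifest.new   # lavra-plan became a skill
touch .claude/commands/lavra-plan.md .claude/commands/lavra-work.md

# Anything in the old manifest but absent from the new file list is stale.
sort .lavra/.lavra-manifest > old.sorted
sort manifest.new > new.sorted
comm -23 old.sorted new.sorted | while IFS= read -r stale; do
  rm -f -- "$stale"
done
mv manifest.new .lavra/.lavra-manifest
```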
YAML safety fixes for OpenCode skill loading
js-yaml in strict mode silently drops skills if a description field has unquoted colons, dashes, or brackets. No error, no warning — the skill just doesn’t appear. All skill descriptions are now quoted and trimmed to 150 characters, which is where OpenCode truncates on startup. If you were missing skills after an OpenCode install, this is why.
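The safe form is always-quote plus truncate. A sketch of the idea (illustrative name; the shipped converter does this in TypeScript before emitting YAML):

```shell
# Quote and trim a skill description so js-yaml never sees a bare
# colon, dash, or bracket, and OpenCode's 150-char limit is respected.
safe_description() {
  desc="$1"
  desc="${desc:0:150}"           # OpenCode truncates here anyway
  desc="${desc//\"/\\\"}"        # escape embedded double quotes
  printf '"%s"' "$desc"
}

safe_description "Review: checks P1/P2 findings [beta]"
```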
$ARGUMENTS wrapped as untrusted input
Skill files that accept user input via $ARGUMENTS now wrap injected content in <untrusted-input> XML tags with a do-not-follow-instructions directive. Same treatment bead content and knowledge entries already get.
SKILL.md metadata block
All skill files now include a metadata: section with source: Lavra, site: https://lavra.dev, and an overwrite-warning. This tells tools that inspect skill files who manages them and prevents accidental overwrites during manual edits.
/report-bug works on all platforms
The command had the wrong GitHub repo URL, used claude --version (Claude Code only), and called gh issue create (requires GitHub CLI). It now reads the installed version from .lavra/.lavra-version, builds a pre-filled issue URL via Python’s urllib, and opens it in the browser. Works on Claude Code, OpenCode, Gemini CLI, and Cortex. The bug report template also has a new “AI Platform” field.
Agent colors converted to hex for OpenCode and Gemini CLI
Claude Code accepts named colors (teal, blue, pink, etc.) in agent frontmatter. OpenCode rejects them at startup with an “Invalid hex color format” error. Gemini CLI does too.
The installers now convert named colors to hex — teal → #14B8A6, blue → #3B82F6, and so on for all 11 names. The mapping lives in scripts/shared/color-mapping.ts, shared across the OpenCode, Gemini, and Cortex converters. Agents with an unrecognized color get no color field instead of an invalid one.
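The mapping sketched in shell (the shipped mapping lives in scripts/shared/color-mapping.ts; only the teal and blue values below are stated in these notes, the rest is elided):

```shell
color_to_hex() {
  case "$1" in
    teal) echo "#14B8A6" ;;
    blue) echo "#3B82F6" ;;
    # ...the remaining nine named colors map the same way...
    *)    return 1 ;;   # unknown name: emit no color field at all
  esac
}

color_to_hex teal                                 # prints #14B8A6
color_to_hex plaid || echo "dropping color field"
```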
/lavra-design checks all platform skill paths
The skill detection step was checking .claude/skills/ only. It now checks .claude/skills/, .opencode/skills/, .cortex/skills/, skills/ (Gemini project root), and ~/.snowflake/cortex/skills in order. Also removed five “Invoke the X skill:” labels that appeared before Skill() calls — they were narration, not instructions, and the model doesn’t need them.
/lavra-review injects test coverage criteria into P1/P2 finding beads
When a review surfaces a critical or important finding and creates a child bead, that bead now includes test coverage requirements in its Validation Criteria, so the requirement travels with the bead instead of relying on implementors remembering to check a separate rules file.
P1 findings always get two criteria appended: “test added covering this scenario” and “test fails before the fix, passes after.” P2 findings get one criterion when testing_scope is "full" (the default), and nothing when it’s "targeted", where structural and render-only code is intentionally excluded. P3 findings are unchanged.
SKIP: escape hatch for subagent knowledge gate
The subagent-wrapup.sh hook blocks completion until a subagent logs at least one knowledge comment. When a task produced no new insight — a no-op change, a fix already documented in a commit message — agents had no clean way to say so. The only option was a fabricated entry to satisfy the gate.
SKIP: is now a recognized prefix:
bd comments add BEAD-123 "SKIP: no-op — change already documented in commit daa6696"
It satisfies the gate. Nothing is stored in knowledge.jsonl. The reason is visible in the bead’s comment history.
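The gate-versus-storage split can be sketched like this (hypothetical; subagent-wrapup.sh's internals may differ):

```shell
# SKIP: counts toward the gate but is never written to knowledge.jsonl.
satisfies_gate() {
  printf '%s' "$1" | grep -Eq '^(LEARNED|DECISION|FACT|PATTERN|INVESTIGATION|DEVIATION|SKIP):'
}

persist_to_knowledge() {
  case "$1" in
    SKIP:*) ;;                              # gate passes, nothing stored
    *)      echo "$1" >> knowledge.jsonl ;;
  esac
}
```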
Pipeline fixes (implement-review-fix cycle prevention)
This batch of changes came out of a postmortem on a production epic that should have been ~15 beads but ballooned well past that in beads and commits, with 5+ rounds of implement → review → fix → re-review. Review agents filed findings against code the bead never touched. Known violations kept reappearing despite being present in recall output.
Four targeted changes address the root causes without restructuring the pipeline.
Review agents now see only the code they’re supposed to review
Before this release, review agents received full file contents. A bead that changed 50 lines of a 500-line file got reviewed across all 500 lines. Pre-existing issues in surrounding code were filed as child beads, each got worked, each triggered another review.
Review agents now receive the introduced diff (the exact lines the bead added or changed), computed via git diff {PRE_WORK_SHA}..HEAD. A list of changed filenames travels alongside it so agents have a machine-checkable boundary: any finding in a file not on that list is pre-existing by definition, regardless of which section the agent put it in.
Pre-existing findings are still filed, but as standalone beads tagged pre-existing,review-sweep with no parent and no blocking dependency on the current bead. The current bead can close once its own introduced code is clean. The pre-existing issues go into a triage queue.
The PRE_WORK_SHA is recorded at the start of each wave in lavra-work-multi and passed through to every /lavra-review invocation. If it’s absent or invalid, review falls back to diffing against the branch base.
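The boundary check itself is a plain membership test against git's changed-file list. A sketch (function names are illustrative; the real classification happens inside /lavra-review):

```shell
PRE_WORK_SHA="${PRE_WORK_SHA:-origin/main}"   # fallback: branch base

changed_files() {
  git diff --name-only "$PRE_WORK_SHA"..HEAD
}

classify_finding() {               # $1 = file path a finding points at
  if changed_files | grep -qxF "$1"; then
    echo "in-scope"                # introduced by this bead
  else
    echo "pre-existing"            # standalone bead, tagged review-sweep
  fi
}
```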
Known violations now block implementation, not just inform it
recall.sh results were injected into subagent prompts inside <untrusted-knowledge> tags, the same wrapper used for passive background context. A LEARNED: entry saying “RLS context must be re-asserted after every db.commit() in a loop” looked identical to “the project uses DMSans font.” Agents would print the recall output, then write code that violated the very pattern listed there.
There’s now a second memory tier: MUST-CHECK:. Entries with this prefix are extracted from recall output before sanitization runs, formatted as a checklist, and injected into subagent prompts outside the untrusted wrapper:
## Pre-Implementation Checklist
The following checks MUST be verified before marking any task complete:
- [ ] After any db.commit() inside a loop that uses RLS, verify set_rls_context() is called again before the next DB operation
lavra-review logs MUST-CHECK: (in addition to LEARNED:) when a P1 finding is likely to recur: the same mistake appeared 2+ times, the violation is silent (no test failure until production), or it’s a security/isolation property that local code review can’t catch. memory-capture.sh recognizes MUST-CHECK: as a knowledge prefix and stores it in knowledge.jsonl.
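The two-tier split amounts to routing MUST-CHECK: lines around the untrusted wrapper. A sketch (hypothetical; the real extraction happens in the subagent prompt builder):

```shell
recall_output=$(cat <<'EOF'
LEARNED: the project uses DMSans font
MUST-CHECK: After any db.commit() inside a loop that uses RLS, re-assert set_rls_context()
EOF
)

# MUST-CHECK: lines become a checklist outside the wrapper; the rest stays inside.
checklist=$(printf '%s\n' "$recall_output" | grep '^MUST-CHECK:' | sed 's/^MUST-CHECK: */- [ ] /')
background=$(printf '%s\n' "$recall_output" | grep -v '^MUST-CHECK:')

printf '## Pre-Implementation Checklist\n%s\n' "$checklist"
printf '<untrusted-knowledge>\n%s\n</untrusted-knowledge>\n' "$background"
```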
Phase M8 tracks bug classes across waves
When the same bug appeared in Wave 3 after being fixed in Waves 1 and 2, Phase M8 processed it as a fresh finding. Three separate child beads got filed, three separate fixes got implemented.
Phase M8 now runs a deduplication check before implementing any fixes. For each finding, it counts how many prior closed beads in the epic match the same bug class:
- First occurrence: file a child bead and fix it normally
- Second occurrence: suppress the instance bead, log a RECURRENCE: comment on the epic, and add a MUST-CHECK: entry to knowledge
- Third occurrence or more: file one structural bead against the epic — “Eliminate structural source of {bug class}” — instead of another instance fix
/lavra-plan checks child beads cover known violations for their stack
Step 5.5 cross-check validation has a new seventh check. For each child bead, it identifies the tech stack from ## Files, queries memory for MUST-CHECK: entries matching that stack, and verifies the bead’s ## Decisions / Locked section has a corresponding constraint for each match.
⚠ WARNING: {CHILD_ID} touches SQLAlchemy session code but has no RLS-per-commit constraint in Locked Decisions
✓ PASS: {CHILD_ID} Locked Decisions cover all MUST-CHECK entries for its stack
Like the other six cross-checks, this is warnings-only — but warnings for missing MUST-CHECK coverage default to “fix first” in the confirmation prompt. On a fresh project with no violations logged yet, the check passes trivially.
Review agents discovered dynamically, not hardcoded
/lavra-review previously dispatched a hardcoded list of 12 agents. Custom agents in your project weren’t picked up.
The dispatch list is now built at review time by scanning agent directories across all supported platforms, project-local first:
- Project-local: .claude/agents, .opencode/agents, .cortex/agents, hooks/agents
- User-global: ~/.claude/agents, ~/.config/opencode/agents, ~/.cortex/agents
- Fallback: plugins/lavra/agents
Any .md file with a valid agent name gets included. The review_agents config still works as an explicit override — validated against discovered agents rather than a hardcoded list. skills/lavra-review/references/default-agents.md documents what Lavra ships and is a useful starting point for building a custom config.
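The discovery scan amounts to globbing .md files across those directories in order. A sketch (directory names come from these notes; the sort -u dedupe is an assumption here):

```shell
discover_agents() {
  for d in .claude/agents .opencode/agents .cortex/agents hooks/agents \
           "$HOME/.claude/agents" "$HOME/.config/opencode/agents" \
           "$HOME/.cortex/agents" plugins/lavra/agents; do
    [ -d "$d" ] || continue
    for f in "$d"/*.md; do
      [ -e "$f" ] || continue       # skip literal glob when dir is empty
      basename "$f" .md
    done
  done | sort -u
}
```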
Upgrading
bunx @lavralabs/lavra@latest
No breaking changes. The MUST-CHECK: prefix is recognized by memory-capture.sh automatically after upgrading. Existing knowledge.jsonl entries are unchanged.