What broke before. For months LIRIL didn't produce useful work. The automation diagnostic traced it to a NATS port mismatch, a rogue jargon-emitter making 1,357 noise commits a week, and the absence of any correction-capture loop. Fixes landed today in commits 5d55d6a3, 0dc9c984, b893ce7c.
What this page is. The correction-training loop, made visible. Each time a role produces a bad output and gets corrected, `data/liril_corrections.jsonl` appends one line. Once a role accumulates 50 or more corrections, its LoRA can be fine-tuned on the RTX 5070 Ti (~45 min). The coder's status then flips from base to tuned-v1, and subsequent cycles use the trained adapter. The bar here shows how close each coder is to its first trainable batch.
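The exact schema of a corrections line isn't shown on this page; a minimal sketch of the append step, with hypothetical field names borrowed from the CLI flags (`role`, `bad`, `correct`, `annotation`, `severity`, `label`):

```python
import json
from pathlib import Path

def log_correction(path, role, bad, correct, annotation, severity, label):
    """Append one correction as a single JSON line (field names are assumptions)."""
    record = {
        "role": role,
        "bad": bad,
        "correct": correct,
        "annotation": annotation,
        "severity": severity,
        "label": label,
    }
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One line per correction keeps the file append-only and trivially diffable, which is what makes the "50+ corrections" counter cheap to compute.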
- `tools/liril_dev_team.py` runs a cycle; each role's output lands in `data/liril_dev_team_log/`.
- Reviewer runs `python tools/liril_correct.py log --role engineer --transcript WS-009_*.json --bad "..." --correct "..." --annotation "..." --severity high --label negative` to capture a correction.
- When ≥50 corrections accumulate for a role, `python tools/liril_build_training_set.py` compiles them into `models/loras/<role>/train.jsonl`.
- `python tools/liril_train_lora.py --role engineer` fine-tunes the LoRA on the RTX 5070 Ti (~45 min, ~14 GB VRAM peak).
- Registry status flips; the next dev-team cycle for that role loads the new adapter. Measure: does pass-rate go up?
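The ≥50 gate in the steps above reduces to a count over the corrections file. A sketch of that check, assuming one JSON object per line with a `role` field (illustrative, not the actual tool internals):

```python
import json
from collections import Counter
from pathlib import Path

THRESHOLD = 50  # corrections needed before a role's LoRA is trainable

def trainable_roles(corrections_path):
    """Count corrections per role; return the roles that have reached the threshold."""
    counts = Counter()
    for line in Path(corrections_path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            counts[json.loads(line)["role"]] += 1
    return {role: n for role, n in counts.items() if n >= THRESHOLD}
```

Because the file is append-only JSONL, this scan is the whole bookkeeping: no database, just a `Counter` over lines.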
Mistral-Nemo-Instruct-2407-Q4_K_M.gguf (12B params, 4-bit quantized) runs on both RTX 5070 Tis via llama-server on ports 8082 and 8083. LoRA rank 16, alpha 32, dropout 0.05. Target modules: q_proj, k_proj, v_proj, o_proj. Seed 118400.
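Those hyperparameters map directly onto a PEFT-style LoRA config. A sketch as a plain dict whose keys mirror `peft.LoraConfig` fields; this is not the project's actual training code:

```python
# Hyperparameters from above, expressed as a PEFT-style config (illustrative).
LORA_CONFIG = {
    "r": 16,                 # LoRA rank
    "lora_alpha": 32,        # effective scale = lora_alpha / r = 2.0
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "seed": 118400,          # fixed seed for reproducible runs
}
```

Targeting only the four attention projections is what keeps the trainable footprint small; MLP layers are left frozen.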
A 12B full fine-tune needs 48 GB+ of VRAM; a rank-16 LoRA on the same model needs ~14 GB. Two RTX 5070 Tis have 32 GB combined: comfortable headroom for LoRA, nowhere near enough for a full fine-tune. LoRA adapters are also ~30 MB each, so we can keep multiple role adapters on disk and swap them at inference time.
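The tens-of-megabytes adapter figure can be sanity-checked with back-of-envelope arithmetic: a rank-r LoRA adds r * (d_in + d_out) trainable params per targeted projection. The layer count and projection shapes below are assumptions about the 12B architecture, not published values:

```python
def lora_param_count(n_layers, dims, rank):
    """Trainable params for a rank-r LoRA: r * (d_in + d_out) per targeted projection."""
    return n_layers * sum(rank * (d_in + d_out) for d_in, d_out in dims)

# Assumed shapes for a 12B model with grouped-query attention (illustrative):
# q_proj 5120->4096, k_proj 5120->1024, v_proj 5120->1024, o_proj 4096->5120
dims = [(5120, 4096), (5120, 1024), (5120, 1024), (4096, 5120)]
params = lora_param_count(n_layers=40, dims=dims, rank=16)
size_mb = params * 2 / 1e6  # fp16 = 2 bytes per param
```

Under these assumed shapes the adapter lands in the same tens-of-MB ballpark as the ~30 MB quoted above, versus ~24 GB for the full 12B weights at fp16.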