Why this document exists
A 19-axis investigation with 200+ content pages fails if readers can't find the argument. Most accountability sites are flat navigation + dated-list landing pages — that works for ~50 pages, breaks at 200+. This document records the specific IA decisions TENET5 made to keep the investigation navigable, and why. It's also a template: any other investigator building a large evidence base can apply the same pillars.
Curated reading path: shock → finding → structural → action
A first-time visitor gets 11 pages in 4 phases in ~45 minutes via reading-path.html, not a blank index. This applies Ben Shneiderman's Visual Information Seeking Mantra (1996), "overview first, zoom and filter, then details-on-demand": the 4 phases are the overview; each phase's 2-3 pages zoom in; the per-page primary-source citations are details-on-demand.
Phase 1 · Shock
Concrete evidence. Visceral entry.
Phase 2 · Finding
The cross-axis pattern.
Phase 3 · Structural
Why the system misses it.
Phase 4 · Action
What citizens can do.
axes-index.html always allows the power-user jump.
TL;DR at top, primary sources at bottom
Every analysis page starts with a 2-3 sentence summary, then the structural finding, then per-source detail, then the receipts. Readers who only have 30 seconds get the thesis; readers who have 10 minutes get the evidence chain; readers who want to verify get the SHA-256 Merkle anchors.
Every claim has a source. Every source has a hash. Every hash is committed.
IA for evidence-based journalism requires provenance to be structurally inseparable from the claim. On TENET5:
- Every axis has a committed data/{axis}_grover_decisionmakers.json.
- Every page cites primary sources inline with named documents.
- Every computation is reproducible from committed data + published Merkle receipts.
- Every LIRIL-authored artifact carries the consultation record (prompt + reply + elapsed ms + timestamp).
- Every correction (including caught hallucinations) gets a hallucination_correction metadata field on the fixed object.
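The "every source has a hash" pillar can be sketched in a few lines. This is an illustrative Python sketch, not the repo's actual tooling: `sha256_file` and `merkle_root` are hypothetical names, and the real scripts live under tools/_merkle_*.py.

```python
import hashlib
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Hex SHA-256 of a file's bytes — the per-dossier hash that is committed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def merkle_root(leaf_hashes: list[str]) -> str:
    """Fold hex leaf hashes pairwise up to a single root.

    Odd-sized levels duplicate their last node; the root is what a published
    Merkle receipt would anchor.
    """
    level = [bytes.fromhex(h) for h in leaf_hashes]
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Recomputing the root over the committed data files and comparing it to the published receipt is what makes "every hash is committed" checkable by any reader.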
Automated reading sequence across the PAGE_SEQUENCE
193 curated pages in PAGE_SEQUENCE + 5 presentation phases per page. When a reader starts the walkthrough on any page, the engine:
- Narrates every [data-narrate] block in order (mean ~8 per page).
- On page-end, sets sessionStorage.liril_autopilot = true.
- Calls window.__TENET5_NEXT_PAGE() to advance.
- Next page auto-starts narration on arrival (autopilot flag persists).
- At PAGE_SEQUENCE end, shows a "Tour complete" overlay.
A key toggles autoplay (persisted in localStorage.tenet5_wt_autoplay). The chain advances even without autopilot via liril-walkthrough.js's own advanceToNextPageWalkthrough(), gated on sessionStorage.liril_autopilot.
Every visual presentation has an audio + text equivalent
Every [data-narrate] attribute carries a screen-reader-friendly narration of the visual content. Voice synthesis (Clara via the LIRIL voice profile) reads sections aloud. A transcript side-panel (T key) shows all narration as scrollable text for deaf / hard-of-hearing readers. Closed captions (C key) display the current sentence while speech plays. A reading-time estimate (~170 wpm × speed multiplier) tells users how long the page takes.
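The reading-time estimate is simple arithmetic. A one-function sketch, assuming the formula is exactly word count ÷ (wpm × speed multiplier) as the text states (the function name is illustrative):

```python
def reading_time_minutes(word_count: int, wpm: float = 170.0, speed: float = 1.0) -> float:
    """Estimated minutes to narrate `word_count` words at `wpm` × a speed multiplier."""
    return word_count / (wpm * speed)
```

At the default 170 wpm this also reproduces the site-wide figure quoted later: 76,954 narration words works out to roughly 7.5 hours at 1× speed.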
Privacy by design, not by policy
Zero third-party trackers. No Google Analytics. No
Facebook Pixel. No server-side open/click tracking on the .eml
campaigns. Page state (walkthrough progress, sent-campaign marks)
lives in localStorage on the user's device — never
transmitted anywhere. Even the LIRIL-authored-advice dashboard reads
from static JSON files, not a live API.
When LIRIL drafts, LIRIL is credited — and verified
Every LIRIL-drafted artifact (campaign letters, interpretations,
analyses) is attributed with subject (tenet5.liril.infer),
elapsed milliseconds, timestamp, and prompt. When LIRIL hallucinates,
the hallucination is documented on the roadmap before being fixed.
Every commit that LIRIL influenced is co-authored in the Git
metadata. Human review is the canonical gate — the
LIRIL roadmap's
4 red REJECTED rows (R5-Q3, R6 triple-rejection, etc.) show the
gate catching real errors.
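The attribution record described above (subject, prompt, elapsed milliseconds, timestamp) has a natural shape as a small data structure. A hedged sketch — the class name, the `reply` field, and all example values are illustrative, not TENET5's actual schema:

```python
from dataclasses import asdict, dataclass


@dataclass
class ConsultationRecord:
    """Provenance carried by a LIRIL-authored artifact.

    Field names beyond those stated in the text (subject, prompt,
    elapsed ms, timestamp) are illustrative.
    """
    subject: str      # e.g. "tenet5.liril.infer"
    prompt: str       # what LIRIL was asked
    reply: str        # what LIRIL returned
    elapsed_ms: int   # inference latency
    timestamp: str    # ISO-8601


# Hypothetical example record.
rec = ConsultationRecord(
    subject="tenet5.liril.infer",
    prompt="Draft a campaign letter for one axis.",
    reply="(LIRIL draft text)",
    elapsed_ms=2140,
    timestamp="2025-01-01T00:00:00Z",
)
```

Keeping the record as plain serializable data (`asdict(rec)`) is what lets it travel inside the committed JSON next to the artifact it attributes.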
Every change is atomic, Git-committed, and publishable
The whole site is a GitHub repo at
github.com/TENET-5/TENET-5.github.io. Every change is a
commit with a descriptive message. Every session's work is traceable.
Every dossier file's SHA-256 is recomputable. Every campaign's correction
history lives in the data file itself. Recovery from a bad state is
always a git revert away.
Anti-patterns — what TENET5 deliberately does NOT do
reading-path.html makes the whole-site scope knowable too.
Reading-time footprint
Total site narration: 76,954 words across 1,669 narration blocks
(measured by scripts/scan_narration_integrity.py). At 170 words/min,
that's ~7.5 hours of end-to-end voiceover. The
reading-path.html curated subset is ~45 minutes. The
state-of-investigation.html executive summary is ~8 minutes.
Using this as a template
To apply these pillars to your own investigation site:
- Fork github.com/TENET-5/TENET-5.github.io
- Replace data/*_grover_decisionmakers.json with your own axis dossiers.
- Replace page content; keep the structure (reading-path, axes-index, state-of-investigation).
- Keep the shell.js load chain as-is — it's the canonical walkthrough stack.
- Publish Merkle receipts from the tools/_merkle_*.py scripts — all parameterized by SYSTEM_SEED.
- Open-source everything. Provenance is worthless if it's private.
SYSTEM_SEED 118400 · author: human · commit trail on GitHub