For teams shipping AI agents into work that gets audited. A self-hosted memory layer where every output is reproducible from stored evidence, every claim cites its source, and every change is verifiable. One binary. Zero cloud.
Every state-of-the-art agent memory layer treats the model as a trusted writer. The data model assumes good faith. So when the obvious question arrives, you can't answer it in less than four hours.
Where did the agent get this?
Compliance asks. You have a model name, a transcript, and three log files in three places. The retrieval set that produced the answer was discarded after the request.
Reproducibility
Was that protocol still current?
The agent cited a guideline. It was correct three months ago and superseded last week. Your memory layer has no concept of valid-from and valid-until. The wrong version sat in retrieval.
Time
Did the agent see the contradiction?
Two notes in different sessions said different things. The agent silently used whichever was retrieved last. There was no signal that the conflict existed.
Conflict
"The transcript is not the audit trail. The audit trail is the chain of evidence that produced the transcript."
Field note · From a discovery call with a clinical-AI team, 2026
These guarantees are wired into the storage layer, so they can't be opted out of. Each one replaces an afternoon of manual reconstruction with a single query.
Tamper-evident change log
Each note, each edit, each model call gets a fingerprint linked to the previous one. If anything was tampered with, the chain shows you exactly where. Your audit trail becomes a chain you can verify — not a story you have to tell.
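Under the hood this is the classic hash-chain construction. A minimal sketch in Go of the verification walk, illustrative shape only, not Smriti's shipped schema:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Event is one entry in the tamper-evident log. PrevHash links it
// to the event before it; Hash fingerprints the whole record.
type Event struct {
	Payload  string
	PrevHash string
	Hash     string
}

// fingerprint hashes the payload together with the previous hash,
// so changing any earlier event invalidates every hash after it.
func fingerprint(payload, prevHash string) string {
	sum := sha256.Sum256([]byte(prevHash + "\n" + payload))
	return hex.EncodeToString(sum[:])
}

// verify walks the chain and reports the first broken link, if any.
func verify(chain []Event) (int, bool) {
	prev := ""
	for i, e := range chain {
		if e.PrevHash != prev || e.Hash != fingerprint(e.Payload, e.PrevHash) {
			return i, false // tampering localized to event i
		}
		prev = e.Hash
	}
	return -1, true
}

func main() {
	var chain []Event
	prev := ""
	for _, p := range []string{"note created", "note edited", "model call"} {
		h := fingerprint(p, prev)
		chain = append(chain, Event{Payload: p, PrevHash: prev, Hash: h})
		prev = h
	}
	chain[1].Payload = "note edited (tampered)" // simulate tampering
	if i, ok := verify(chain); !ok {
		fmt.Printf("chain broken at event %d\n", i)
	}
}

The point of the construction: a tampered event can't be patched in isolation, because every later fingerprint depends on it.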
Memory that earns its place
Notes earn durability through repeated, time-spread access. Old + replayed = important. Old + abandoned = fades. The graph stops bloating. The most-cited knowledge consolidates into durable schemas.
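One way to picture the scoring: each access contributes a vote that decays with age, so replays spread across months outweigh a single old burst. A hypothetical formula for illustration, not the shipped one:

package main

import (
	"fmt"
	"math"
	"time"
)

// durability sketches a score where repeated, time-spread accesses
// reinforce a note and long silence lets it decay. Hypothetical
// exponential-decay formula; not Smriti's actual scoring.
func durability(accesses []time.Time, now time.Time, halfLife time.Duration) float64 {
	score := 0.0
	for _, t := range accesses {
		age := now.Sub(t)
		// each access contributes, halving in weight every halfLife
		score += math.Exp2(-age.Hours() / halfLife.Hours())
	}
	return score
}

func main() {
	now := time.Now()
	day := 24 * time.Hour
	// old but replayed: accesses spread over months
	replayed := []time.Time{now.Add(-90 * day), now.Add(-30 * day), now.Add(-2 * day)}
	// old and abandoned: one burst of access, long ago
	abandoned := []time.Time{now.Add(-91 * day), now.Add(-90 * day), now.Add(-89 * day)}

	fmt.Printf("replayed:  %.3f\n", durability(replayed, now, 30*day))
	fmt.Printf("abandoned: %.3f\n", durability(abandoned, now, 30*day))
}

With a 30-day half-life, the replayed note scores roughly four times the abandoned one, even though both are "old."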
Citations the system enforces
A claim that doesn't textually overlap with a cited source is rejected at write time. Not "logged" — rejected. The model can't quietly invent a citation, because the storage layer won't accept one.
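A toy version of that write-time gate, assuming a simple trigram-overlap test (the real check can be stricter):

package main

import (
	"fmt"
	"strings"
)

// ngrams returns the set of word n-grams in s, lowercased.
func ngrams(s string, n int) map[string]bool {
	words := strings.Fields(strings.ToLower(s))
	out := make(map[string]bool)
	for i := 0; i+n <= len(words); i++ {
		out[strings.Join(words[i:i+n], " ")] = true
	}
	return out
}

// acceptClaim rejects a write when the claim shares no trigram with
// its cited source. Toy overlap test for illustration only.
func acceptClaim(claim, source string) error {
	sourceGrams := ngrams(source, 3)
	for g := range ngrams(claim, 3) {
		if sourceGrams[g] {
			return nil // at least one shared trigram: accepted
		}
	}
	return fmt.Errorf("rejected at write time: no textual overlap with cited source")
}

func main() {
	source := "administer 50 mg twice daily with food"
	for _, claim := range []string{
		"the guideline says 50 mg twice daily",
		"the guideline recommends a higher dose",
	} {
		if err := acceptClaim(claim, source); err != nil {
			fmt.Println("✗", err)
		} else {
			fmt.Println("✓ accepted:", claim)
		}
	}
}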
Every link carries valid-from and valid-until. Stale guidance can't quietly resurface.
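In data terms: the validity window is an explicit part of the link, and retrieval filters on it. A sketch with hypothetical field names:

package main

import (
	"fmt"
	"time"
)

// Link carries an explicit validity window. Illustrative shape,
// not Smriti's actual schema.
type Link struct {
	Claim      string
	ValidFrom  time.Time
	ValidUntil time.Time // zero value = still current
}

// currentAt keeps only links valid at the moment of retrieval,
// so superseded guidance cannot resurface.
func currentAt(links []Link, at time.Time) []Link {
	var out []Link
	for _, l := range links {
		if at.Before(l.ValidFrom) {
			continue
		}
		if !l.ValidUntil.IsZero() && !at.Before(l.ValidUntil) {
			continue
		}
		out = append(out, l)
	}
	return out
}

func main() {
	day := 24 * time.Hour
	now := time.Now()
	links := []Link{
		{Claim: "protocol v2", ValidFrom: now.Add(-100 * day), ValidUntil: now.Add(-7 * day)},
		{Claim: "protocol v3", ValidFrom: now.Add(-7 * day)},
	}
	for _, l := range currentAt(links, now) {
		fmt.Println(l.Claim) // only "protocol v3"
	}
}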
When two notes disagree, the system tells you. No silent overwrites.
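The shape of that signal, sketched: retrieval returns every note on a key plus a conflict flag, instead of last-write-wins. Illustrative only:

package main

import "fmt"

// Note records a claim about some key. Illustrative shape.
type Note struct {
	Key, Value, Session string
}

// retrieve returns all notes for a key and whether they disagree,
// rather than silently keeping whichever was written last.
func retrieve(notes []Note, key string) ([]Note, bool) {
	var hits []Note
	conflict := false
	for _, n := range notes {
		if n.Key != key {
			continue
		}
		if len(hits) > 0 && hits[0].Value != n.Value {
			conflict = true
		}
		hits = append(hits, n)
	}
	return hits, conflict
}

func main() {
	notes := []Note{
		{Key: "max_dose", Value: "50 mg", Session: "s3"},
		{Key: "max_dose", Value: "75 mg", Session: "s7"},
	}
	hits, conflict := retrieve(notes, "max_dose")
	fmt.Println("notes:", len(hits), "conflict:", conflict) // notes: 2 conflict: true
}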
Frequently accessed knowledge hardens into durable schemas. Abandoned notes fade.
Same model, same prompt, same retrieval set — bit-identical answer. By design, not by hope.
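The replay check reduces to hashing: pin the model, seed, prompt template, and retrieval set, re-run, and compare fingerprints. A sketch where field names are hypothetical and fakeModel stands in for a deterministic, fixed-seed model call:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// ReplayRecord pins everything the answer depended on.
// Hypothetical field names for illustration.
type ReplayRecord struct {
	Model        string
	Seed         int
	PromptTmpl   string
	RetrievalSet []string // note IDs, in retrieval order
	OutputHash   string   // hash of the answer at request time
}

// hashOutput fingerprints an answer so replays can be compared
// bit-for-bit instead of eyeballed.
func hashOutput(answer string) string {
	sum := sha256.Sum256([]byte(answer))
	return hex.EncodeToString(sum[:])
}

// replayMatches re-runs the call with the pinned inputs and checks
// the new answer hashes to the stored value.
func replayMatches(r ReplayRecord, runModel func(model string, seed int, prompt string) string) bool {
	prompt := r.PromptTmpl + "\n" + strings.Join(r.RetrievalSet, "\n")
	return hashOutput(runModel(r.Model, r.Seed, prompt)) == r.OutputHash
}

func main() {
	fakeModel := func(model string, seed int, prompt string) string {
		return fmt.Sprintf("%s/%d answered: %d chars of context", model, seed, len(prompt))
	}
	rec := ReplayRecord{
		Model: "llama3.2:1b", Seed: 42, PromptTmpl: "clinical_qa@v3",
		RetrievalSet: []string{"n_s3_1", "n_s4_1", "n_s5_1"},
	}
	rec.OutputHash = hashOutput(fakeModel(rec.Model, rec.Seed, rec.PromptTmpl+"\n"+strings.Join(rec.RetrievalSet, "\n")))
	fmt.Println("bit-identical:", replayMatches(rec, fakeModel))
}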
A 30-megabyte program plus a database file. Air-gappable. Deployable anywhere.
Smriti runs on your hardware, against your database file. No vendor in the middle. No telemetry.
Measured benchmarks. Peer-reviewed research. No demo magic. Every claim on this page is verifiable against the codebase.
$ smriti verify --chain
─── chain integrity ──────────────────────────
events scanned     50,127
chain integrity    ✓ verified
prior tampering    ✓ none detected
last event         2026-05-08T14:21:33Z
walk time          487 ms

$ smriti audit replay call_8f7a02e1b
─── reproducibility check ────────────────────
model              llama3.2:1b
seed               42
prompt template    clinical_qa@v3
retrieval set      [ n_s3_1, n_s4_1, n_s5_1 ]
stored hash        a3f2…0e1b
re-run hash        a3f2…0e1b
match              ✓ bit-identical
$
In customer discovery, not selling. Twenty minutes; you talk, we listen.
Or email hi@smriti.dev — same destination, no form.