As organizations adopt AI assistants for research, compliance, and clinical operations, a new risk emerges: knowledge that cannot be verified. Smriti is the infrastructure layer that makes every AI-generated claim traceable, every change auditable, and every contradiction visible.
AI assistants generate plausible-sounding content that can contain fabricated dates, invented citations, and hallucinated facts. In regulated industries, a single unverified claim can trigger audits, regulatory holds, or litigation. Current knowledge tools — Notion, Confluence, OneNote, SharePoint — treat every write as equally trustworthy. They have no provenance, no contradiction detection, and no audit trail that survives scrutiny.
Smriti enforces integrity at the point of write, not after the fact. Every claim must cite its source. Every change is hash-chained. Every contradiction is surfaced before it reaches a reviewer. One command verifies the entire knowledge base in seconds. This is not a feature bolted onto a note app — it is a purpose-built integrity layer for the age of AI-generated knowledge.
Every note records which version of a policy, protocol, or standard was active at the time it was written. When rules change, historical notes retain their original context — not the current one. This eliminates retroactive misattribution across regulatory, legal, and compliance workflows.
Every claim in the knowledge base must cite a source. Smriti measures the structural overlap between each claim and its cited source using FACTUM scoring. Claims with weak grounding are flagged automatically. AI-generated content without source attribution is rejected at write time.
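FACTUM's exact scoring algorithm is not reproduced here; as a minimal stand-in for "structural overlap," the sketch below measures the fraction of a claim's tokens that appear in its cited source and flags claims below an illustrative threshold. The function name and cutoff are assumptions for illustration only.

```python
import re

def token_overlap(claim: str, source: str) -> float:
    """Fraction of claim tokens that also appear in the cited source."""
    tok = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    c, s = tok(claim), tok(source)
    return len(c & s) / len(c) if c else 0.0

THRESHOLD = 0.5  # illustrative cutoff for "weakly grounded"

claim = "The trial enrolled 240 patients across 12 sites."
source = "Enrollment closed with 240 patients randomized at 12 clinical sites."
score = token_overlap(claim, source)
print(f"{score:.2f}", "flagged" if score < THRESHOLD else "ok")
```

A real scorer would weight entities, numbers, and dates more heavily than stop words, but the shape is the same: a deterministic, per-claim grounding measure with no LLM-as-judge in the loop.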
When new information conflicts with existing knowledge, Smriti surfaces the contradiction with a confidence score — but never auto-resolves it. Contradictions land in a human review inbox. This prevents the silent overwrite problem where AI edits erase correct information with plausible-sounding errors.
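The surface-don't-resolve behavior can be sketched in a few lines: a write that conflicts with a stored fact never overwrites it, it lands in a review queue instead. Confidence scoring is omitted here for brevity; the flat key-value store is a simplified stand-in for the knowledge graph.

```python
store: dict[str, str] = {"dosage": "10 mg"}
inbox: list[tuple[str, str, str]] = []  # (key, existing value, incoming value)

def write_fact(key: str, value: str) -> None:
    existing = store.get(key)
    if existing is not None and existing != value:
        # Contradiction: surface it for human review, never auto-resolve.
        inbox.append((key, existing, value))
    else:
        store[key] = value

write_fact("dosage", "100 mg")  # conflicts with the stored "10 mg"
print(store["dosage"])          # 10 mg  (unchanged)
print(len(inbox))               # 1
```

The invariant is that a plausible-sounding AI edit can never silently replace a correct fact; the worst case is an extra item in a human's inbox.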
Every write — create, edit, link, resolve — appends an event with a SHA-256 hash of the previous event. If any record is tampered with, the chain breaks at the exact point of alteration. One command walks the entire chain and reports integrity status. This is the technical substrate for compliance frameworks that require tamper-evident records.
| Smriti Primitive | Regulatory Framework | What It Provides |
|---|---|---|
| Bi-temporal edges | 21 CFR 312.32 · SOX | Provable answer to "what did the rules say when this decision was made?" |
| SHA-256 hash chain | 21 CFR Part 11 · SOC 2 | Tamper-evident, append-only audit trail with cryptographic binding |
| FACTUM provenance | ICH E6(R3) · Fair disclosure | Measurable overlap score between every claim and its cited source |
| Contradiction inbox | CIOMS · GAAP restatement | Conflicting facts surface for human review, never auto-resolved |
| Integrity sweep | Pre-audit preparation | One command checks every claim, link, hash, and contradiction in seconds |
Researchers, coordinators, and analysts install a single binary. No IT ticket. No cloud account. No data leaves their machine. Free, open-source. TAM: 50M+ knowledge workers globally.
A coordinating center, research lab, or investment team aggregates knowledge across members via WebDAV sync. Nightly integrity sweeps. First paid tier: $50–100/seat/month.
Validated build for sponsors, law firms, or fund compliance. The pitch writes itself: "Your people already use this." Enterprise tier: $200–500/seat/month. IQ/OQ/PQ package included.
An interactive 6-step walkthrough showing how a site coordinator uses Smriti to handle protocol versioning, catch AI hallucinations in safety narratives, resolve sponsor queries, and run a pre-monitor integrity sweep — all with real synthetic trial data loaded into a live Smriti instance.
Every organization adopting AI for documentation faces the same question: "How do we prove the AI didn't hallucinate?" Smriti's provenance scoring provides a structural, measurable answer. This need is net-new and growing with every AI deployment.
Cloud AI tools face BAA/DPA review cycles of 6–12 months in regulated industries. Smriti runs entirely on-premise with zero cloud dependencies. Data never leaves the machine. This eliminates the single longest blocker in enterprise adoption of AI-adjacent tooling.
Obsidian has the graph, but no integrity. Mem0 has the agent memory, but no provenance. Neo4j has the database, but requires a server. Smriti is the only tool that combines a knowledge graph, agent memory, enforced provenance, and a cryptographic audit trail in a single local binary. The combination is the moat.
| Capability | Research Basis | Key Finding |
|---|---|---|
| Bi-temporal edges | Zep / Graphiti (arXiv:2501.13956) | 18.5% improvement on LongMemEval with temporal awareness |
| Provenance scoring | FACTUM (arXiv:2601.05866) | Structural citation verification without LLM-as-judge |
| Contradiction detection | MemoTime (arXiv:2510.13614) | Confidence-weighted scoring prevents silent overwrites |
| Conflict resolution | AGM Belief Revision (arXiv:2603.17244) | Formal postulates for knowledge base contraction and revision |
| Hybrid retrieval | Graph Memory Survey (arXiv:2602.05665) | Graph + BM25 hybrid outperforms pure vector for multi-hop tasks |
| Typed graph layers | MAGMA (arXiv:2601.03236) | Semantic/temporal/causal layers reduce token usage by 95% |
Single binary. Zero cloud. Enforced provenance. Cryptographic audit trail. The integrity layer for the next generation of AI-driven workflows.