Use Case

The Integrity Layer
for AI-Driven Knowledge

As organizations adopt AI assistants for research, compliance, and clinical operations, a new risk emerges: knowledge that cannot be verified. Smriti is the infrastructure layer that makes every AI-generated claim traceable, every change auditable, and every contradiction visible.

Single binary — no cloud
SQLite — one file
Local-first — data stays on-premise
MCP-native — plugs into any LLM
Open source — Apache 2.0
The Shift
AI Is Writing Your Organization's Knowledge.
Who Verifies It?
Every enterprise is adopting AI for documentation, research, and compliance. But the tools that store knowledge — wikis, note apps, databases — were built before AI could author content. They have no mechanism to verify what the AI wrote.

The Problem

AI assistants generate plausible-sounding content that can contain fabricated dates, invented citations, and hallucinated facts. In regulated industries, a single unverified claim can trigger audits, regulatory holds, or litigation. Current knowledge tools — Notion, Confluence, OneNote, SharePoint — treat every write as equally trustworthy. They have no provenance, no contradiction detection, and no audit trail that survives scrutiny.

The Smriti Approach

Smriti enforces integrity at the point of write, not after the fact. Every claim must cite its source. Every change is hash-chained. Every contradiction is surfaced before it reaches a reviewer. One command verifies the entire knowledge base in seconds. This is not a feature bolted onto a note app — it is a purpose-built integrity layer for the age of AI-generated knowledge.

3.2s — Full Integrity Sweep (vs hours of manual review)
235ns — Graph Traversal (BFS depth-2, in-memory)
2.5µs — Key-Value Retrieval (agent memory lookup)
0 — Cloud Dependencies (fully local, fully offline)
1 — Binary to Deploy (60-second install, any OS)
Core Capabilities
Four Integrity Primitives, One Binary
Each capability maps to a specific class of risk that AI-generated knowledge introduces. Together, they form a defense-in-depth layer that existing tools do not provide.

Bi-Temporal Versioning

Every note records which version of a policy, protocol, or standard was active at the time it was written. When rules change, historical notes retain their original context — not the current one. Eliminates retroactive misattribution across regulatory, legal, and compliance workflows.

Maps to: 21 CFR 312.32 · SOX temporal accuracy · Legal privilege dating
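The core of bi-temporal versioning is that every fact carries the interval during which it was true in the world, separate from when it was written down. A minimal sketch of the valid-time half, with invented field names (valid_from, valid_to) and example data, assuming nothing about Smriti's actual schema:

```python
from datetime import date

# Each version of a policy records the interval during which it was in force.
# valid_to is None for the currently active version (an open interval).
policy_versions = [
    {"policy": "v1", "valid_from": date(2023, 1, 1), "valid_to": date(2024, 6, 1)},
    {"policy": "v2", "valid_from": date(2024, 6, 1), "valid_to": None},
]

def policy_as_of(versions, when):
    """Return the policy version that was in force on a given date."""
    for v in versions:
        if v["valid_from"] <= when and (v["valid_to"] is None or when < v["valid_to"]):
            return v["policy"]
    return None

# A note written in March 2024 stays attributed to v1, even after v2 lands.
assert policy_as_of(policy_versions, date(2024, 3, 15)) == "v1"
assert policy_as_of(policy_versions, date(2025, 1, 1)) == "v2"
```

Because the lookup is keyed to the note's own date rather than "now", a later amendment can never retroactively change which rules a historical note is judged against.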

Enforced Provenance

Every claim in the knowledge base must cite a source. Smriti measures the structural overlap between each claim and its cited source using FACTUM scoring. Claims with weak grounding are flagged automatically. AI-generated content without source attribution is rejected at write time.

Maps to: ICH E6(R3) data integrity · FACTUM arXiv:2601.05866
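FACTUM's structural scoring is more involved, but a toy stand-in shows the shape of the check: a claim whose tokens appear in its cited source scores high, while a fabricated claim scores near zero. The Jaccard metric and the 0.3 threshold below are illustrative assumptions, not Smriti's actual algorithm:

```python
def grounding_score(claim, source):
    """Naive stand-in for structural overlap: Jaccard similarity over word sets."""
    c, s = set(claim.lower().split()), set(source.lower().split())
    return len(c & s) / len(c | s) if c | s else 0.0

THRESHOLD = 0.3  # hypothetical cutoff; claims below it get flagged as weakly grounded

source = "the dose was escalated to 20 mg on day 14 per protocol amendment 3"
grounded = "dose escalated to 20 mg on day 14"
fabricated = "subject withdrew consent after a severe reaction"

assert grounding_score(grounded, source) > THRESHOLD     # accepted at write time
assert grounding_score(fabricated, source) < THRESHOLD   # rejected or flagged
```

The point of the design is that the check is deterministic and measurable; no LLM-as-judge is needed to decide whether a claim is anchored to its citation.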

Contradiction Detection

When new information conflicts with existing knowledge, Smriti surfaces the contradiction with a confidence score — but never auto-resolves it. Contradictions land in a human review inbox. This prevents the silent overwrite problem where AI edits erase correct information with plausible-sounding errors.

Maps to: AGM belief revision arXiv:2603.17244 · CIOMS reconciliation
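The write path can be sketched as: a conflicting value never replaces the stored one, it lands in a review queue with its confidence attached. The store layout, key names, and return strings below are assumptions for illustration:

```python
# Existing knowledge: key -> (value, confidence)
store = {"protocol.visit_window_days": ("7", 0.98)}
review_inbox = []

def write_claim(key, value, confidence):
    """Surface conflicts for human review instead of silently overwriting."""
    if key in store and store[key][0] != value:
        review_inbox.append({
            "key": key,
            "existing": store[key][0],
            "incoming": value,
            "confidence": confidence,
        })
        return "queued_for_review"
    store[key] = (value, confidence)
    return "written"

# An AI edit that contradicts a stored fact is held, not applied.
assert write_claim("protocol.visit_window_days", "14", 0.6) == "queued_for_review"
assert store["protocol.visit_window_days"][0] == "7"   # original value preserved
assert len(review_inbox) == 1
```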

Cryptographic Audit Trail

Every write — create, edit, link, resolve — appends an event with a SHA-256 hash of the previous event. If any record is tampered with, the chain breaks at the exact point of alteration. One command walks the entire chain and reports integrity status. This is the technical substrate for compliance frameworks that require tamper-evident records.

Maps to: 21 CFR Part 11 · SOC 2 audit evidence · eIDAS qualified logs
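The hash-chain mechanics fit in a few lines. This is an illustrative model, not Smriti's actual event schema: the field names (event, prev, hash), canonical JSON serialization, and all-zero genesis hash are assumptions for the sketch:

```python
import hashlib
import json

def append_event(chain, event):
    """Append an event, binding it to the previous event's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain):
    """Walk the chain; return the index of the first broken link, or -1 if intact."""
    prev_hash = "0" * 64
    for i, entry in enumerate(chain):
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return i
        prev_hash = entry["hash"]
    return -1

chain = []
append_event(chain, {"op": "create", "note": "n1"})
append_event(chain, {"op": "edit", "note": "n1"})
assert verify_chain(chain) == -1          # chain intact

chain[0]["event"]["note"] = "tampered"
assert verify_chain(chain) == 0           # breaks at the exact altered record
```

Because each hash covers the previous hash, editing any historical record invalidates every subsequent link, which is what makes the trail tamper-evident rather than merely append-only.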
Market Application
One Binary, Five High-Value Verticals
The same four integrity primitives address verifiability needs across industries where knowledge accuracy has financial or regulatory consequences.
Clinical Trials
Protocol amendments tracked bi-temporally. SAE narratives grounded to source notes. Monitor visit prep in 4 seconds. Investigator site notebooks with audit trails that meet FDA intent.
Edge types: amended_by · reports_ae · deviates
Investment Research
10-K filings, earnings transcripts, and sell-side reports linked with temporal provenance. Detect when revised guidance contradicts prior analyst consensus. Audit trail for investment committee memos.
Edge types: revised_guidance · contradicts · cites_filing
Legal Research
Case law citations verified against source. Precedent chains tracked across jurisdictions. Draft briefs grounded to actual holdings. Privilege-dated notes that survive discovery.
Edge types: cites_precedent · overrules · distinguishes
Academic Research
Literature reviews with enforced citation provenance. Hypothesis evolution tracked bi-temporally. Lab notebook integrity for reproducibility. The "11pm advisor text" problem solved structurally.
Edge types: supports · refutes · replicates
Internal Ops & BI
Decision memos linked to the data that informed them. Post-mortem reports with hash-chained timelines. Competitive intelligence with source attribution. SOC 2 audit evidence by construction.
Edge types: informed_by · supersedes · escalated_to
Compliance
Regulatory Alignment by Design
Smriti is not "certified" for any framework out of the box. But every feature provides the technical substrate that these frameworks require — reducing the validation effort from building infrastructure to documenting it.
Smriti Primitive | Regulatory Framework | What It Provides
Bi-temporal edges | 21 CFR 312.32 · SOX | Provable answer to "what did the rules say when this decision was made?"
SHA-256 hash chain | 21 CFR Part 11 · SOC 2 | Tamper-evident, append-only audit trail with cryptographic binding
FACTUM provenance | ICH E6(R3) · Fair disclosure | Measurable overlap score between every claim and its cited source
Contradiction inbox | CIOMS · GAAP restatement | Conflicting facts surface for human review, never auto-resolved
Integrity sweep | Pre-audit preparation | One command checks every claim, link, hash, and contradiction in seconds
Go-to-Market
Individual Adoption to Enterprise Contract
Smriti does not require an enterprise sale to generate value. It starts with one person on one laptop and expands from there.
1. Individual Practitioners — Day 1

Researchers, coordinators, analysts install a single binary. No IT ticket. No cloud account. No data leaves their machine. Free, open-source. TAM: 50M+ knowledge workers globally.

2. Team & Department — Month 6

A coordinating center, research lab, or investment team aggregates knowledge across members via WebDAV sync. Nightly integrity sweeps. First paid tier: $50–100/seat/month.

3. Enterprise — Month 18

Validated build for sponsors, law firms, or fund compliance. The pitch writes itself: "Your people already use this." Enterprise tier: $200–500/seat/month. IQ/OQ/PQ package included.

See It in Action: Clinical Trial Operations Demo

An interactive 6-step walkthrough showing how a site coordinator uses Smriti to handle protocol versioning, catch AI hallucinations in safety narratives, resolve sponsor queries, and run a pre-monitor integrity sweep — all with real synthetic trial data loaded into a live Smriti instance.

Launch Demo →
Defensibility
Why This Wins Now
Three structural tailwinds that did not exist 18 months ago.

AI Grounding Is the New Requirement

Every organization adopting AI for documentation faces the same question: "how do we prove the AI didn't hallucinate?" Smriti's provenance scoring provides a structural, measurable answer. This need is net-new and growing with every AI deployment.

Local-First Is the Compliance Shortcut

Cloud AI tools face BAA/DPA review cycles of 6–12 months in regulated industries. Smriti runs entirely on-premise with zero cloud dependencies. Data never leaves the machine. This eliminates the single longest blocker in enterprise adoption of AI-adjacent tooling.

Graph + Integrity Is Unoccupied

Obsidian has the graph, but no integrity. Mem0 has the agent memory, but no provenance. Neo4j has the database, but requires a server. Smriti is the only tool that combines a knowledge graph, agent memory, enforced provenance, and a cryptographic audit trail in a single local binary. The combination is the moat.

Research Foundation
Peer-Reviewed Design Decisions
Every integrity feature traces to published research. This is not a prototype — it is a deliberate implementation of verified approaches.
Capability | Research Basis | Key Finding
Bi-temporal edges | Zep / Graphiti (arXiv:2501.13956) | 18.5% improvement on LongMemEval with temporal awareness
Provenance scoring | FACTUM (arXiv:2601.05866) | Structural citation verification without LLM-as-judge
Contradiction detection | MemoTime (arXiv:2510.13614) | Confidence-weighted scoring prevents silent overwrites
Conflict resolution | AGM Belief Revision (arXiv:2603.17244) | Formal postulates for knowledge base contraction and revision
Hybrid retrieval | Graph Memory Survey (arXiv:2602.05665) | Graph + BM25 hybrid outperforms pure vector for multi-hop tasks
Typed graph layers | MAGMA (arXiv:2601.03236) | Semantic/temporal/causal layers reduce token usage by 95%

Knowledge Your Organization
Can Actually Verify

Single binary. Zero cloud. Enforced provenance. Cryptographic audit trail. The integrity layer for the next generation of AI-driven workflows.