An interactive walkthrough of how Smriti solves the three biggest problems clinical research coordinators face every week. Real scenarios, real data, real commands.
Maria Santos is a Clinical Research Coordinator at University Medical Center. She manages 8 concurrent oncology, cardiology, and neurology trials. Here's what's loaded in her Smriti instance right now.
We've loaded 11 notes across 3 active trials, plus agent memory tracking Maria's pending actions. This is a fraction of what a real CRC's notebook looks like, but it's enough to demonstrate every integrity primitive.
| Trial | Drug | Phase | Patients | Notes Loaded | Status |
|---|---|---|---|---|---|
| Trial-A: ONCORIX | Immunotherapy | Phase III | 16 | 5 (protocols, screening, labs, visit prep) | Enrolling |
| Trial-B: CARDIOGUARD | Cardiac | Phase II | 28 | 4 (visit notes, SAE report, AI draft) | SAE pending |
| Trial-C: NEUROBALANCE | CNS | Phase III | 22 | 2 (screening, sponsor query) | Query open |
- **URGENT:** Trial-B SAE narrative due TODAY — AI draft has potential errors
- **HIGH:** Trial-A monitor visit Thursday — need integrity sweep
- **HIGH:** Trial-C sponsor query on Patient 22 MMSE score — response due Wednesday
The monitor is coming Thursday. She'll ask about Patient 14, who was enrolled 4 days before the protocol amendment. Maria needs to prove she used the right version.
Patient 14 was screened on March 10, 2026. The protocol was amended from v2.1 to v2.3 on March 14, 2026. The sodium inclusion range changed from 136-145 to 130-150 mEq/L. Patient 14's sodium was 142 — within both ranges, but the monitor needs to see which version Maria was actually following.
Maria opens her OneNote, finds the screening note, but it just says "met all criteria." She digs through the paper regulatory binder to find when v2.3 was implemented. She cross-references the date against the EDC entry timestamp. This takes 2 hours.
Maria's notes are already linked with bi-temporal edges. Smriti knows Protocol v2.1 was valid from 2025-11-01 to 2026-03-14, and v2.3 from 2026-03-14 onwards. The screening note date (March 10) falls within v2.1's validity window.
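The lookup itself is simple once validity windows exist. Here is a minimal Python sketch of a bi-temporal version lookup; the data structure and function names are illustrative assumptions, not Smriti's actual schema.

```python
from datetime import date

# (version, valid_from, valid_to); None means still current.
# Dates taken from the scenario above.
PROTOCOL_VERSIONS = [
    ("v2.1", date(2025, 11, 1), date(2026, 3, 14)),
    ("v2.3", date(2026, 3, 14), None),
]

def version_active_on(day: date) -> str:
    """Return the protocol version whose validity window contains `day`."""
    for version, start, end in PROTOCOL_VERSIONS:
        if start <= day and (end is None or day < end):
            return version
    raise LookupError(f"no protocol version active on {day}")

print(version_active_on(date(2026, 3, 10)))  # v2.1 -- Patient 14's screening date
```

Because the window boundaries are half-open (`valid_from` inclusive, `valid_to` exclusive), a note dated exactly on the amendment day resolves to the new version, never to both.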
2 hours digging through binders. Risk of misremembering which version was active. Monitor may still issue a finding.
3 seconds. The temporal link proves which version was active on that date. Lab report linked as source document. Monitor question answered before she sits down.
An AI assistant drafted an SAE narrative for Patient 7. It hallucinated a medication start date. If this reaches the FDA, the trial could face a regulatory hold.
Patient 7 had a serious GI bleed. Maria asked an AI to draft the safety narrative. The AI wrote: "Patient started aspirin 81mg on February 15, 2026." But Maria's visit notes from November 2025 clearly say: "Patient continues aspirin 81mg daily — ongoing since 2025-05-12."
The patient was on aspirin for 9 months before the AI's claimed start date. This matters because the duration of aspirin use directly affects causality assessment for GI bleeding.
"Subject 7 is a 62-year-old male... The patient started aspirin 81mg on February 15, 2026 for cardiovascular prophylaxis... The patient was started on metformin 500mg on March 1, 2026 for newly diagnosed type 2 diabetes. No other concomitant medications were reported."
"Aspirin 81mg PO daily — ongoing since 2025-05-12 (cardiovascular prophylaxis)... Metformin 500mg PO BID — ongoing since 2024-08-01 (type 2 diabetes)... Lisinopril 10mg PO daily... Atorvastatin 20mg PO daily"
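The contradiction check reduces to comparing claimed start dates against recorded ones and spotting omissions. A sketch of that logic, with field names and structures assumed for illustration (lisinopril and atorvastatin carry `None` because the demo notes give no start dates for them):

```python
from datetime import date

# Start dates the AI draft claims.
ai_claims = {
    "aspirin 81mg": date(2026, 2, 15),
    "metformin 500mg": date(2026, 3, 1),
}
# Start dates recorded in Maria's source notes.
source = {
    "aspirin 81mg": date(2025, 5, 12),
    "metformin 500mg": date(2024, 8, 1),
    "lisinopril 10mg": None,
    "atorvastatin 20mg": None,
}

flags = [
    f"{med}: draft says {claimed}, source says {source[med]}"
    for med, claimed in ai_claims.items()
    if med in source and source[med] is not None and source[med] != claimed
]
omitted = sorted(set(source) - set(ai_claims))
if omitted:
    flags.append("draft omits concomitant meds: " + ", ".join(omitted))

for flag in flags:
    print(flag)  # two wrong start dates plus one omission flag
```

Three flags, matching the three contradictions in the scenario: two fabricated start dates and the false "no other concomitant medications" claim.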
Maria reads the AI draft, it "sounds right," and she submits it. An FDA reviewer catches the wrong aspirin duration during causality review. The trial faces a regulatory hold: a 3-6 month delay and millions of dollars.
Three contradictions flagged in seconds. Maria corrects the start dates, adds the missing medications, and submits a clean narrative. The causality assessment is accurate. No regulatory risk.
The sponsor asks about an MMSE score discrepancy for Patient 22. Maria needs to provide the full chain of evidence — fast.
The sponsor's medical monitor sees that the EDC shows MMSE score 25 for Patient 22, but the eTMF has a document showing score 22. They want an explanation. Maria has until Wednesday.
"Patient 22 had two MMSE assessments during screening. The initial score of 22 (Jan 15) did not meet inclusion. Per protocol Section 4.2, one repeat assessment is permitted within the screening window. The repeat MMSE on Jan 18 scored 25, meeting the criterion. The EDC correctly reflects the qualifying score. The eTMF upload error has been identified — the first (non-qualifying) MMSE form was uploaded instead of the repeat. We are uploading the correct form today."
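A response like this is assembled by walking the note graph outward from the query. A toy sketch of that traversal, with node names invented for illustration (they are not Smriti's actual identifiers):

```python
from collections import deque

# Toy note-link graph for the Patient 22 MMSE query.
links = {
    "query:patient22-mmse": ["note:mmse-2026-01-15", "note:mmse-2026-01-18"],
    "note:mmse-2026-01-15": ["protocol:v2.3#section-4.2"],
    "note:mmse-2026-01-18": ["protocol:v2.3#section-4.2", "edc:patient22"],
    "protocol:v2.3#section-4.2": [],
    "edc:patient22": [],
}

def evidence_chain(start: str) -> list[str]:
    """Breadth-first walk collecting every node reachable from `start`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(evidence_chain("query:patient22-mmse"))
```

One traversal surfaces both MMSE forms, the protocol section permitting the repeat, and the EDC entry — the entire evidence chain behind the written response.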
Thursday morning. The monitor arrives at 9 AM. Maria runs one command at 8:55 AM.
Maria has fixed the SAE narrative contradictions (Step 2) and resolved the Patient 22 query (Step 3). Now she needs to confirm that everything across Trial-A is clean before the monitor sits down. Every claim, every link, every hash.
Maria spends 3-5 hours the night before manually reviewing binders, cross-checking EDC entries against source docs, and hoping she didn't miss anything. She still gets a finding on the one inconsistency she couldn't see.
3.2 seconds. Every claim verified against source. Hash chain intact. Zero open contradictions. Maria walks into the visit at 9 AM knowing exactly what the monitor will find: nothing.
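The "hash chain intact" part of the sweep works like any tamper-evident log: each note's hash covers its content plus the previous note's hash, so altering any note breaks every hash after it. A minimal sketch of the idea (Smriti's actual on-disk format may differ):

```python
import hashlib

def link_hash(content: str, prev_hash: str) -> str:
    """Hash a note's content chained to the previous note's hash."""
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

def build_chain(notes: list[str]) -> list[tuple[str, str]]:
    chain, prev = [], ""
    for note in notes:
        h = link_hash(note, prev)
        chain.append((note, h))
        prev = h
    return chain

def verify(chain: list[tuple[str, str]]) -> bool:
    prev = ""
    for note, h in chain:
        if link_hash(note, prev) != h:
            return False
        prev = h
    return True

chain = build_chain(["screening note", "lab report", "visit prep"])
print(verify(chain))          # True -- untouched chain verifies
chain[1] = ("lab report (edited)", chain[1][1])  # tamper with one note
print(verify(chain))          # False -- the stored hash no longer matches
```

Verification is a single linear pass, which is why sweeping an entire trial takes seconds rather than an evening.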
Same three problems. Same deadlines. Fundamentally different outcomes.
| Problem | Without Smriti | With Smriti | Impact |
|---|---|---|---|
| Protocol version lookup | 2 hours in binders | 3 seconds, one command | ~2 hours |
| SAE narrative review | Hallucination reaches FDA | 3 contradictions caught instantly | Prevented regulatory hold |
| Sponsor query response | 45 min reconstructing chain | 30 seconds with full evidence | ~44 minutes |
| Monitor visit prep | 3-5 hours, still misses things | 3.2 seconds, catches everything | ~4 hours |
Bi-temporal edges — Every note knows which protocol version was active on its date. The monitor question answers itself.
Contradiction detection — AI-generated claims are checked against real source notes before submission. Wrong dates and missing medications are caught, not submitted.
Knowledge graph — Notes link to protocols, lab reports, and each other. A sponsor query that used to require manual reconstruction now returns a full evidence chain in one search.
Integrity sweep — One command checks every claim, every link, every hash. The CRC knows the state of her data before the monitor does.
Agent memory — Pending actions, trial status, and priorities are tracked as structured data. The AI assistant knows Maria's context without hallucinating it.
This entire demo ran on a single binary with a SQLite database on Maria's laptop. No cloud. No IT ticket. No patient data left the machine. No API keys. Install time: 60 seconds.
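For a sense of how little infrastructure that implies, here is a hypothetical minimal schema sketch using Python's built-in `sqlite3` — the table and column names are assumptions for illustration, not Smriti's real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # on disk this would be one local .db file
con.executescript("""
CREATE TABLE protocol_versions (
    version     TEXT PRIMARY KEY,
    valid_from  TEXT NOT NULL,
    valid_to    TEXT              -- NULL = still current
);
CREATE TABLE notes (
    id          INTEGER PRIMARY KEY,
    body        TEXT NOT NULL,
    event_date  TEXT NOT NULL,    -- when the event happened
    sha256      TEXT NOT NULL     -- link in the integrity hash chain
);
""")
con.execute("INSERT INTO protocol_versions VALUES ('v2.1','2025-11-01','2026-03-14')")
con.execute("INSERT INTO protocol_versions VALUES ('v2.3','2026-03-14',NULL)")

# The monitor-visit question as one query: which version was active on a date?
row = con.execute(
    "SELECT version FROM protocol_versions "
    "WHERE valid_from <= ? AND (valid_to IS NULL OR ? < valid_to)",
    ("2026-03-10", "2026-03-10"),
).fetchone()
print(row[0])  # v2.1
```

ISO-8601 date strings compare correctly as text in SQLite, so the bi-temporal lookup needs no extensions — one reason a single-file, no-cloud deployment is plausible.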
Multiply Maria by 50,000 CRCs at US academic medical centers. Each running 5-15 concurrent trials. Each losing 2-5 hours per week on exactly these reconciliation tasks. That's the wedge.
The same binary — the same four integrity primitives — serves investment research, legal research, academic research, and internal ops. Clinical trials is the highest-revenue vertical ($200-500/seat/month). But the same code runs everywhere notes need to be verifiable.