Every fact has a receipt. Every claim traces to who said it, when, and how confident they were. When a source is wrong, retract it — corroborated facts survive.
The most valuable knowledge in any organization — incident patterns, research hunches, tribal expertise — is contradictory, multi-source, and changes every week. Traditional databases can't hold it. Attest was built for it.
Ten questions other databases can't answer — because they don't track provenance structurally.
db.impact("paper_42") — If this source is retracted, what breaks? How many claims and entities depend on it?
A key journal retracts a paper your drug target evaluation depended on. In seconds, see every downstream conclusion that relied on it — and which ones survive on independent evidence.
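The mechanics behind an impact query can be sketched in plain Python. Everything here is illustrative (the claim names, the data model, even the function shape), not the Attest API: the key idea is that a claim only breaks if the retracted source was its last remaining support.

```python
# Toy model: each claim maps to the set of sources backing it.
claims = {
    "target_is_druggable": {"paper_42", "assay_7"},
    "binding_affinity_high": {"paper_42"},
    "pathway_relevant": {"paper_12"},
}

def impact(source_id):
    """Claims that depend on a source, split by whether they survive retraction."""
    affected = {c for c, srcs in claims.items() if source_id in srcs}
    # A claim breaks only if the retracted source was its sole support.
    broken = {c for c in affected if claims[c] == {source_id}}
    return {"affected": affected, "broken": broken,
            "survives": affected - broken}

report = impact("paper_42")
```

Note that `target_is_druggable` survives: it still rests on `assay_7`, which is exactly the "independent evidence" distinction described above.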
db.blindspots() — Which entities are backed by only a single source? Where are you vulnerable?
Your team has 300 claims about kinase inhibitors and 3 about the metabolic pathway that might connect them. That's not just a gap — it's a research opportunity no single researcher would have noticed.
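A blindspot query reduces to counting distinct sources per entity. A minimal sketch with made-up data (not the Attest API or its schema):

```python
from collections import defaultdict

# Toy (entity, source) pairs as an extractor might produce them.
records = [
    ("kinase_inhibitor_X", "paper_1"),
    ("kinase_inhibitor_X", "paper_2"),
    ("kinase_inhibitor_X", "assay_3"),
    ("metabolic_pathway_Y", "paper_9"),  # a single independent source
]

sources_by_entity = defaultdict(set)
for entity, source in records:
    sources_by_entity[entity].add(source)

def blindspots():
    """Entities backed by exactly one source: your single points of failure."""
    return sorted(e for e, s in sources_by_entity.items() if len(s) == 1)
```

Using a set per entity matters here: three claims from the same source still count as one source, so source volume never masks a blindspot.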
db.consensus("BRCA1") — How many sources agree? What's the agreement ratio across independent sources?
Before committing to a target, see exactly where the literature agrees, where it disagrees, and which disagreements are backed by stronger evidence.
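One plausible way to compute an agreement ratio, sketched with invented stance records (support/dispute labels and claim names are assumptions for illustration, not Attest's actual representation):

```python
# Toy stance records: each independent source supports or disputes a claim.
stances = {
    "BRCA1_loss_drives_tumor": {
        "paper_3": "support",
        "paper_8": "support",
        "paper_11": "dispute",
    },
}

def consensus(claim_id):
    """Agreement ratio across independent sources for one claim."""
    votes = list(stances[claim_id].values())
    support = votes.count("support")
    return {"sources": len(votes), "agreement": support / len(votes)}

r = consensus("BRCA1_loss_drives_tumor")
```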
db.fragile() — Find claims backed by a single source. These are your weakest links.
These are the facts your organization treats as settled that could collapse with a single retraction. Find them before they surprise you.
db.stale(days=90) — What hasn't been corroborated or updated recently? Time-aware knowledge hygiene.
Knowledge decays. A claim from 2022 with no recent corroboration is a liability, not an asset.
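A staleness check is a time-window filter over last-corroboration timestamps. A sketch with hypothetical records (claim names and the storage shape are illustrative, not the Attest engine):

```python
from datetime import datetime, timedelta

# Toy data: claim id mapped to the timestamp of its last corroboration.
last_corroborated = {
    "legacy_vpn_still_in_use": datetime(2022, 6, 1),
    "auth_handles_500_rps": datetime.now() - timedelta(days=10),
}

def stale(days=90):
    """Claims not corroborated within the freshness window."""
    cutoff = datetime.now() - timedelta(days=days)
    return sorted(c for c, t in last_corroborated.items() if t < cutoff)
```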
db.audit(claim_id) — Full provenance chain for any claim: who said it, what corroborates it, what depends on it.
When a regulator asks “how did you arrive at this conclusion?”, the answer isn't a narrative reconstruction — it's a queryable graph of every piece of evidence, who provided it, and whether any of it has been retracted since.
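The shape of such an audit trail can be sketched as a lookup over per-claim provenance records. All field names and data below are assumptions for illustration, not Attest's schema:

```python
# Toy provenance records: author, corroborating evidence, dependents, status.
provenance = {
    "dose_is_safe": {
        "asserted_by": "alice@lab",
        "corroborated_by": ["tox_assay_12", "paper_88"],
        "depended_on_by": ["trial_protocol_v3"],
        "retracted": False,
    },
}

def audit(claim_id):
    """Full provenance chain for one claim: who, what evidence, what depends on it."""
    p = provenance[claim_id]
    return {
        "claim": claim_id,
        "who": p["asserted_by"],
        "evidence": p["corroborated_by"],
        "downstream": p["depended_on_by"],
        "retracted_since": p["retracted"],
    }

trail = audit("dose_is_safe")
```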
db.drift(days=30) — How has your knowledge changed? New claims, new entities, retracted sources, confidence trends.
See what your organization learned this month, what it unlearned, and where confidence shifted — without reading a single Slack thread.
db.source_reliability() — Per-source corroboration and retraction rates. Which sources can you trust?
After six months, the system knows which sources consistently get corroborated and which ones get contradicted. Trust becomes empirical, not reputational.
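Empirical trust reduces to per-source outcome ratios. A sketch over invented outcome counts (the source names, counters, and scoring rule are illustrative assumptions, not how Attest scores sources):

```python
# Toy outcome log: for each source, how its claims fared over time.
outcomes = {
    "journal_A": {"corroborated": 18, "contradicted": 2, "retracted": 0},
    "blog_B":    {"corroborated": 3,  "contradicted": 5, "retracted": 1},
}

def source_reliability():
    """Empirical trust score: share of a source's tracked claims corroborated."""
    return {src: o["corroborated"] / sum(o.values())
            for src, o in outcomes.items()}

scores = source_reliability()
```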
db.hypothetical(claim) — Would this claim corroborate existing knowledge? Does it fill a gap?
Before running an experiment, see how the result would fit into your existing knowledge topology. Prioritize experiments that resolve the most uncertainty.
db.on("claim_retracted", notify_downstream) — Event hooks that fire when knowledge changes.
When a claim about drug toxicity is retracted, automatically notify every agent and system that used it. Knowledge changes propagate — they don't silently go stale.
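The hook pattern itself is a small event registry. A toy version of the publish/subscribe flow implied by `db.on(...)` (the `emit` side and payload shape are assumptions; this is not Attest's internal dispatcher):

```python
from collections import defaultdict

_hooks = defaultdict(list)

def on(event, callback):
    """Register a callback to fire when the named event occurs."""
    _hooks[event].append(callback)

def emit(event, payload):
    """Deliver an event to every registered callback, in registration order."""
    for cb in _hooks[event]:
        cb(payload)

notified = []
on("claim_retracted", lambda claim: notified.append(claim))
emit("claim_retracted", "drug_toxicity_low")
```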
Attest was built by Omic, an AI drug discovery company. We use it internally to track findings across thousands of papers, internal assays, and agent-generated research. The result: contradictions surface automatically, retracted sources are traced in seconds, and research gaps become visible before they become costly.
Watch knowledge build up from multiple sources — then see what happens when one turns out to be wrong. Each scenario auto-plays and loops.
"The East Palestine water is contaminated." Here's how three different databases store that:
When an agent extracts 500 facts from your Slack channels overnight, you need every write to carry its source. Not as metadata you hope someone fills in — as a hard requirement the engine enforces. If two sources contradict each other, both claims coexist. When one is discredited, you retract it and the other survives.
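The write-requires-source and contradictions-coexist behavior can be sketched as a toy store in a few lines of plain Python (a teaching model, not Attest's actual engine or API):

```python
class ClaimStore:
    def __init__(self):
        self.claims = []

    def write(self, statement, source=None):
        # Provenance is a hard requirement, not optional metadata.
        if not source:
            raise ValueError("provenance required: every claim needs a source")
        self.claims.append({"statement": statement, "source": source,
                            "retracted": False})

    def retract_source(self, source):
        for c in self.claims:
            if c["source"] == source:
                c["retracted"] = True

    def active(self):
        return [c["statement"] for c in self.claims if not c["retracted"]]

db = ClaimStore()
db.write("water is contaminated", source="epa_report")
db.write("water is safe", source="press_release")  # contradiction coexists
db.retract_source("press_release")                 # discredited; other survives
```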
This is the part that matters most.
Run Attest for a week and you have structured notes. Run it for six months and you have a reality model — an emergent map of everything your organization knows, where the knowledge is deep, where it's thin, and where different domains connect in ways no single person noticed.
Topics emerge automatically from the claim graph. Your agents extracted 300 claims about your auth system and 4 about data center power capacity. That's not just data — that's a map of where you're knowledgeable and where you're blind. The insight engine finds connections across domain boundaries: auth failures always follow database connection issues, but nobody documented the dependency.
Two years of accumulated evidence can't be speed-run by a competitor. The topology gets richer. Cross-domain connections surface. The organization that started earlier has an advantage that compounds daily and is nearly impossible to replicate.
pip install attestdb — single-file database, no server, no infrastructure.
Point it at a Slack export, a ChatGPT conversation, or a folder of documents —
the built-in extractor (heuristic or LLM-powered, 7 providers supported)
pulls out claims with provenance tracing to the exact message.
Heuristic mode needs no API keys.
Want a visual interface? pip install attest-console gives you a browser
dashboard that connects to live Slack, Gmail, and Google Docs via OAuth.
Ingest your company's knowledge, explore an interactive graph, and ask natural-language
questions — all for ~$0.06 on Groq's free tier.
Provenance is required on every write — the engine rejects claims without a source,
whether the writer is a human or an agent. Batch mode handles millions of claims
via the Rust backend. db.at(timestamp) gives you point-in-time queries —
what the agents knew last Tuesday, before the new data came in.
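Point-in-time reads fall out naturally from an append-only log: replay everything ingested at or before the requested timestamp. A sketch with invented log entries (not Attest's storage format):

```python
from datetime import datetime

# Toy append-only log of (ingested_at, claim) pairs.
log = [
    (datetime(2024, 5, 1), "redis_is_primary_cache"),
    (datetime(2024, 5, 7), "redis_replaced_by_memcached"),
]

def at(ts):
    """Knowledge state as of ts: what the agents knew at that moment."""
    return [claim for t, claim in log if t <= ts]
```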
For AI agents: the built-in MCP server lets Claude and other
MCP-compatible agents read and write Attest directly. A REST API at /api/v1/
serves any HTTP client. Event hooks (db.on("claim_ingested", callback)) let you
build reactive pipelines that trigger when knowledge changes.
An agent reads 200 papers overnight and extracts findings. It notices your team has deep knowledge on kinase inhibition and almost nothing on the metabolic pathway that might connect to it — a gap no single researcher would see.
Use db.blindspots() to find the research gaps hiding between well-covered domains. Use db.hypothetical() to prioritize experiments that resolve the most uncertainty.
Agents ingest 18 months of Slack incident channels and postmortem docs. "What breaks if Redis goes down?" — answered from claims extracted across hundreds of incidents, each traceable to the person who figured it out.
Use db.impact() to answer “what breaks if this system goes down?” from 18 months of sourced incident data. Use db.source_reliability() to know which documentation you can actually trust.
If your agents produce knowledge — from customer calls, experiments, market research, code reviews — Attest is the database layer that makes it compound instead of evaporate.
The MCP server lets Claude and other agents read and write attested claims directly, so what an agent learns in one session is still there, with full provenance, in the next.
$ pip install attestdb attest-console
$ attest-console my_company.db
Opens a dashboard at localhost:8877.
Click Connect Slack — authorize your workspace.
Click Connect Google — authorize Gmail, Drive, Docs.
Go to Ingest. Pick your channels. Hit go.
No API keys. No OAuth apps to create. No environment variables. Your data flows directly between your machine and Slack/Google. Full quick start guide →