# VRIN — Knowledge Reasoning Infrastructure for AI
The reasoning engine for AI that needs to be right.
VRIN structures your documents into a temporal knowledge graph and reasons across them — giving AI answers you can trace to specific facts.
| Benchmark | VRIN | Best Baseline | Gap |
|---|---|---|---|
| MultiHop-RAG | 95.1% | 78.9% (GPT 5.2 w/ same docs) | +16.2pp |
| MuSiQue | 28% more accurate | HippoRAG 2 (academic SOTA) | +10.6 EM |
| FinQA | 71.4% | 11.1% (vector-only retrieval) | +640% |
## Installation

```bash
pip install vrin
```
## Quick Start

```python
from vrin import VRINClient

client = VRINClient(api_key="vrin_your_api_key")

# Insert knowledge
client.insert("ACME Corp reported $50M revenue in Q4 2025.", title="ACME Financials")

# Query — answers traced to specific facts
result = client.query("What is ACME's revenue?")
print(result["summary"])
```
## Bulk Ingestion with Adaptive Concurrency
Ingest hundreds of documents with automatic concurrency control. The SDK uses Netflix's Gradient2 algorithm to monitor backend latency and adjust parallelism in real time — ramping up when healthy, backing off when congested, retrying failures with exponential backoff, and running a sequential recovery pass for any remaining items. Zero information loss guaranteed.
```python
items = [
    {"content": "Apple reported $416B revenue in FY2025.", "title": "AAPL 10-K"},
    {"content": "Microsoft Cloud surpassed $50B quarterly.", "title": "MSFT Earnings"},
    # ... hundreds more
]

result = client.bulk_insert(items)
print(f"Ingested {result['completed']}/{result['total']} — "
      f"{result['facts_stored']} facts stored")
```
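For intuition, here is a minimal sketch of the gradient-style feedback loop described above: a limiter that raises its concurrency while latency tracks the baseline and backs off when latency spikes. This is an illustrative model only (the class and parameter names are made up here), not the SDK's actual Gradient2 implementation:

```python
class GradientLimiter:
    """Simplified sketch of a gradient-based concurrency limiter,
    inspired by Netflix's Gradient2 idea. Names and constants are
    illustrative, not the VRIN SDK's internals."""

    def __init__(self, initial_limit=4, min_limit=1, max_limit=64, smoothing=0.2):
        self.limit = float(initial_limit)
        self.min_limit = min_limit
        self.max_limit = max_limit
        self.smoothing = smoothing
        self.long_rtt = None  # slow-moving baseline of observed latency

    def on_sample(self, rtt):
        """Feed one observed request latency; returns the new limit."""
        # Update the exponentially weighted baseline latency.
        if self.long_rtt is None:
            self.long_rtt = rtt
        else:
            self.long_rtt += self.smoothing * (rtt - self.long_rtt)
        # Gradient < 1 means latency is rising versus baseline: back off.
        gradient = max(0.5, min(1.0, self.long_rtt / rtt))
        new_limit = self.limit * gradient + 1  # +1 lets the limit probe upward
        self.limit = max(self.min_limit, min(self.max_limit, new_limit))
        return int(self.limit)
```

When latency holds near the baseline the gradient stays at 1 and the limit creeps up; a latency spike drives the gradient below 1 and the limit multiplies down, which is what produces the "ramp up when healthy, back off when congested" behavior.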
## Streaming

```python
for token in client.query("Summarize Q4 results", stream=True):
    print(token, end="", flush=True)
```
## Query Modes

```python
result = client.query(
    "Compare ACME and Widget Corp revenues",
    response_mode="research",  # "chat" | "thinking" | "research"
    query_depth="research",    # "basic" | "thinking" | "research"
)
```
## File Upload

```python
client.upload_file("report.pdf", save_to_memory=True)
```
## Conversations

```python
client.start_conversation()
r1 = client.continue_conversation("What was ACME's Q4 revenue?")
r2 = client.continue_conversation("How does that compare to Q3?")  # has context
client.end_conversation()
```
## MCP Integration

VRIN exposes an MCP (Model Context Protocol) server so any compatible AI assistant can query your knowledge base — Claude Code, Claude Desktop, Cursor, Windsurf, or custom agents.

Tools exposed: `vrin_query_async`, `vrin_check_job`, `vrin_search_entities`, `vrin_get_facts`
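The async tool pair implies a submit-then-poll pattern: an agent calls `vrin_query_async` to get a job id, then polls `vrin_check_job` until the job resolves. A generic poll loop might look like this (the status strings and response shape below are assumptions, not the documented MCP schema):

```python
import time

def poll_job(check_job, job_id, interval=1.0, timeout=60.0):
    """Poll `check_job` (e.g. a wrapper around the vrin_check_job tool)
    until the job resolves. The {"status": ..., "result": ...} dict shape
    and the status strings are illustrative assumptions."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = check_job(job_id)
        if job["status"] == "completed":
            return job["result"]
        if job["status"] == "failed":
            raise RuntimeError(f"job {job_id} failed: {job.get('error')}")
        time.sleep(interval)  # still running: wait before polling again
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

A real agent would replace `check_job` with an actual MCP tool call; the loop itself is just the standard pattern for long-running queries.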
## Enterprise — Your Data Stays in Your Cloud

Enterprise API keys (`vrin_ent_*`) route all data through your own AWS/Azure account. Your knowledge graph, your vector store, your encryption keys. Data never touches our infrastructure.

```python
from vrin import VRINEnterpriseClient

client = VRINEnterpriseClient(api_key="vrin_ent_your_key")
result = client.query("What is our Q4 revenue?")
```

Three deployment modes: VRIN Cloud, Hybrid Cloud (your data, our compute), Private VPC (everything in your account).
## What Makes VRIN Different

| | Vector-Only Retrieval | VRIN |
|---|---|---|
| Answers | Similar-looking text chunks | Specific facts, traced to sources |
| Temporal | None — returns latest by similarity | Bi-temporal versioning: "What was true in Q3?" |
| Cross-document | Concatenates chunks | Traverses entity relationships across documents |
| Numbers | LLM interprets from raw text | Constraint extraction + validated numerical fields |
| Aggregation | LLM must infer | Explicit sum/count with calculation steps |
| Audit trail | Chunk-level at best | Fact-level with document, confidence, and provenance |
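In practice, the fact-level audit trail in the last row could be consumed roughly like this. The `facts` key and its field names below are illustrative assumptions about the response shape, not the documented SDK contract; check the SDK reference for the actual fields:

```python
def format_provenance(result):
    """Render an answer followed by its supporting facts.
    The "facts" list and its "statement"/"document"/"confidence"
    fields are assumed here for illustration only."""
    lines = [result["summary"]]
    for fact in result.get("facts", []):
        lines.append(
            f"  - {fact['statement']} "
            f"(source: {fact['document']}, confidence: {fact['confidence']:.2f})"
        )
    return "\n".join(lines)
```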
## License
MIT License — see LICENSE file for details.
Built by the VRIN Team | vrin.cloud | support@vrin.cloud
## File details

### vrin-1.3.3.tar.gz (source distribution)

- Size: 116.9 kB
- Uploaded via: twine/6.1.0 CPython/3.13.7
- Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4c6078aa4b94d73792175aaaf56136290adccc85d3a509ed10f69be4be66a6fd` |
| MD5 | `af79a827a92c643bb7c98744e6b532a6` |
| BLAKE2b-256 | `9d6a896f3f24bf4fa228e0e8a0766c223878ea064e11f7653456e2849fb0f4e9` |
### vrin-1.3.3-py3-none-any.whl (built distribution, Python 3)

- Size: 135.9 kB
- Uploaded via: twine/6.1.0 CPython/3.13.7
- Trusted Publishing: No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `c4cc8c28817231e22962305a6a26aa87de7cb411c06a57934639fc3be8c137a1` |
| MD5 | `fdd04349a3416eadd71d9958f2b2ae2a` |
| BLAKE2b-256 | `b33528985bdc463c8592677814ed0abe4987b6ab9362313bfdcda6142f7837b6` |