Nerva
Composable Agent Primitives
Build your agent system — not the plumbing around it.
What is Nerva?
Nerva is a library — not a server, not a framework. It provides 8 composable primitives that every agent orchestrator needs: context propagation, routing, execution, tools, memory, response formatting, registry, and policy enforcement. Use one primitive or all eight. Replace any piece with your own implementation. Nerva runs inside your existing web framework (FastAPI, NestJS, Express) and stays invisible to your API consumers.
The 8 Primitives
| # | Primitive | What it does |
|---|---|---|
| 0 | ExecContext | Carries request identity, permissions, memory scope, and tracing through the entire call chain |
| 1 | Router | Classifies intent and selects the right agent/handler (embedding, LLM, rule-based, or hybrid) |
| 2 | Runtime | Executes agent code with isolation, timeouts, circuit breakers, and streaming |
| 3 | Tools | Discovers, sandboxes, and invokes external tools (MCP servers, plain functions) |
| 4 | Memory | Tiered context storage — hot (session), warm (episodes/facts), cold (vector search) |
| 5 | Responder | Formats agent output for the target channel and tone |
| 6 | Registry | Unified catalog of agents, tools, and plugins — register, discover, health-check |
| 7 | Policy | Declarative safety, permissions, rate limits, cost budgets, and approval gates |
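The hot/warm/cold layout in the Memory row can be sketched as a simple read-through lookup across tiers. All names below are illustrative assumptions, not Nerva's actual `TieredMemory` API:

```python
# Illustrative sketch of a hot/warm/cold tiered read-through lookup.
# Names are hypothetical, not Nerva's actual API.
from dataclasses import dataclass, field

@dataclass
class TieredStore:
    hot: dict = field(default_factory=dict)   # session-scoped, fastest
    warm: dict = field(default_factory=dict)  # episodes / extracted facts
    cold: dict = field(default_factory=dict)  # stand-in for a vector index

    def get(self, key: str):
        # Check tiers in order of access cost; promote hits to the hot tier.
        for tier in (self.hot, self.warm, self.cold):
            if key in tier:
                value = tier[key]
                self.hot[key] = value
                return value
        return None
```

A real cold tier would be a similarity search over embeddings rather than a dict, but the read-through-and-promote shape is the same.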
Each primitive is a Protocol (Python) / interface (TypeScript) with a default implementation. Swap any piece without touching the rest.
The Gap
SDKs are too thin. Pydantic AI, LiteLLM, and similar libraries give you LLM calls and not much else. You still build routing, memory, tool management, and lifecycle from scratch every time.
Frameworks own your architecture. LangGraph, CrewAI, and AutoGen give you everything — but on their terms. Swap a component and you fight the framework.
Nerva is the middle ground. Opinionated primitives, zero opinions on how you compose them. No base classes to inherit, no lifecycle you did not ask for, no magic graph runtime.
Quick Start — Python
```bash
pip install otomus-nerva
```
```python
import asyncio

from nerva import Orchestrator, ExecContext
from nerva.router import RuleRouter, Rule
from nerva.runtime import InProcessRuntime
from nerva.tools import FunctionToolManager
from nerva.memory import TieredMemory
from nerva.responder import PassthroughResponder
from nerva.registry import InMemoryRegistry
from nerva.policy import NoopPolicyEngine

# Define a simple handler
async def greet_handler(input_text: str, ctx: ExecContext) -> str:
    return f"Hello! You said: {input_text}"

# Wire primitives together
orchestrator = Orchestrator(
    router=RuleRouter(rules=[
        Rule(pattern=r".*", handler="greet", description="Catch-all greeter"),
    ]),
    runtime=InProcessRuntime(handlers={"greet": greet_handler}),
    tools=FunctionToolManager(),
    memory=TieredMemory(),
    responder=PassthroughResponder(),
    registry=InMemoryRegistry(),
    policy=NoopPolicyEngine(),
)

async def main():
    ctx = ExecContext.create(user_id="user_1")
    result = await orchestrator.handle("What's the weather?", ctx)
    print(result.text)

asyncio.run(main())
```
Quick Start — TypeScript
```bash
npm install @otomus/nerva
```
```typescript
import {
  Orchestrator,
  ExecContext,
  RuleRouter,
  FunctionToolManager,
  TieredMemory,
  InMemoryRegistry,
  NoopPolicyEngine,
} from "@otomus/nerva";

// Define a simple handler
async function greetHandler(input: string, ctx: ExecContext): Promise<string> {
  return `Hello! You said: ${input}`;
}

// Wire primitives together
const orchestrator = new Orchestrator({
  router: new RuleRouter({
    rules: [{ pattern: /.*/, handler: "greet", description: "Catch-all greeter" }],
  }),
  runtime: {
    invoke: async (handler, input, ctx) => ({
      text: await greetHandler(input.query, ctx),
      status: "success",
    }),
  },
  tools: new FunctionToolManager(),
  memory: new TieredMemory(),
  responder: { format: async (output, channel, ctx) => ({ text: output.text }) },
  registry: new InMemoryRegistry(),
  policy: new NoopPolicyEngine(),
});

async function main() {
  const ctx = ExecContext.create({ userId: "user_1" });
  const result = await orchestrator.handle("What's the weather?", ctx);
  console.log(result.text);
}

main();
```
CLI
Scaffold new projects and generate components:
```bash
# Create a new agent project
npx nerva new my-agent --lang python
npx nerva new my-agent --lang typescript

# Generate components inside a project
npx nerva generate agent weather
npx nerva generate tool search
npx nerva generate router custom
npx nerva generate middleware logging
```
Framework Integration
Nerva is a library layer — like React, not Next.js. It runs inside your web framework.
```
FastAPI / NestJS / Express   (HTTP, auth, sessions, swagger, CORS)
  └── Nerva                  (agent orchestration: routing, runtime, tools, memory, policy)
        └── LLM providers, MCP servers, subprocess agents
```
FastAPI
```python
from fastapi import FastAPI, Depends
from pydantic import BaseModel

from nerva import Orchestrator, ExecContext
from nerva.contrib.fastapi import get_nerva_ctx

class ChatRequest(BaseModel):
    message: str

class ChatResponse(BaseModel):
    text: str
    tokens: int

app = FastAPI(title="My Agent API")
orchestrator = build_orchestrator()  # wire the primitives as in Quick Start

@app.post("/chat", response_model=ChatResponse)
async def chat(req: ChatRequest, ctx: ExecContext = Depends(get_nerva_ctx)):
    response = await orchestrator.handle(req.message, ctx)
    return ChatResponse(text=response.text, tokens=ctx.token_usage.total)
```
NestJS
```typescript
import { Controller, Post, Body, UseGuards } from '@nestjs/common';
import { Orchestrator, ExecContext } from '@otomus/nerva';
import { NervaCtx } from '@otomus/nerva/contrib/nestjs';
// ChatDto and JwtAuthGuard are your application's own DTO and auth guard.

@Controller('chat')
export class ChatController {
  constructor(private orchestrator: Orchestrator) {}

  @Post()
  @UseGuards(JwtAuthGuard)
  async chat(@Body() dto: ChatDto, @NervaCtx() ctx: ExecContext) {
    return this.orchestrator.handle(dto.message, ctx);
  }
}
```
Express
```typescript
import express from 'express';
import { Orchestrator } from '@otomus/nerva';
import { nervaMiddleware } from '@otomus/nerva/contrib/express';

const app = express();
app.use(express.json());
app.use(nervaMiddleware(config)); // config: your Nerva wiring from Quick Start
const orchestrator = new Orchestrator({ /* ...primitives as in Quick Start... */ });

app.post('/chat', async (req, res) => {
  const response = await orchestrator.handle(req.body.message, req.nervaCtx);
  res.json(response);
});
```
Comparison
| | Composable | Router | Tools | Memory | Policy | Streaming | Server ownership |
|---|---|---|---|---|---|---|---|
| Nerva | Yes — use any piece | Embedding + LLM + rules | MCP + functions | Hot/warm/cold tiers | Declarative YAML | Built-in | Your framework |
| LangGraph | No — full graph | Graph edges | LangChain tools | Checkpointer | None | LangServe | LangServe |
| CrewAI | No — full crew | Role-based | CrewAI tools | Short-term only | None | No | You build it |
| AutoGen | No — conversation | Speaker selection | Function calling | Chat history | None | No | You build it |
| Pydantic AI | Partial | None | Function tools | None | None | result.stream() | Your framework |
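The "Declarative YAML" entry in Nerva's Policy column refers to rules like rate limits and cost budgets expressed as data. A minimal sketch of what such a declarative check involves — this is purely illustrative and not Nerva's actual policy schema or API:

```python
# Illustrative sketch of declarative policy evaluation: rules are plain data,
# the engine just checks current usage against them. Not Nerva's real schema.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    max_requests_per_minute: int
    max_cost_usd: float

def check(rule: PolicyRule, requests_this_minute: int, spent_usd: float) -> tuple[bool, str]:
    if requests_this_minute >= rule.max_requests_per_minute:
        return False, "rate limit exceeded"
    if spent_usd >= rule.max_cost_usd:
        return False, "cost budget exhausted"
    return True, "ok"
```

Because the rules are data rather than code, they can be loaded from YAML, versioned, and audited independently of the handlers they govern.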
Packages
| Package | Description |
|---|---|
| `nerva-py` | Python implementation (3.11+) |
| `@otomus/nerva` | TypeScript implementation (Node 20+) |
| `@otomus/nerva-cli` | CLI for scaffolding and code generation |
License
MIT
File details

Details for the file otomus_nerva-0.2.1.tar.gz.

File metadata
- Download URL: otomus_nerva-0.2.1.tar.gz
- Size: 169.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a280f42ade49a89a5a133b6ccbf53e89e80da94a35c26f14ee712381b5479c99 |
| MD5 | 655421b311395ff048d2eee5ae708dec |
| BLAKE2b-256 | ef28663241482f26bfe4e225debac9197aa4ac363f499bcb45033c356583345f |
Provenance

The following attestation bundles were made for otomus_nerva-0.2.1.tar.gz:

Publisher: release-py.yml on otomus/nerva

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: otomus_nerva-0.2.1.tar.gz
- Subject digest: a280f42ade49a89a5a133b6ccbf53e89e80da94a35c26f14ee712381b5479c99
- Sigstore transparency entry: 1214766479

Publisher details:
- Permalink: otomus/nerva@5380b97eb057d69579173d68d760f54a48ab8646
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/otomus
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release-py.yml@5380b97eb057d69579173d68d760f54a48ab8646
- Trigger Event: push
File details

Details for the file otomus_nerva-0.2.1-py3-none-any.whl.

File metadata
- Download URL: otomus_nerva-0.2.1-py3-none-any.whl
- Size: 132.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0064b1209703ef52a816bdc41f4bda77e742046a9bbe019d35a14b67313cda46 |
| MD5 | ed3d60772d2307e734f6c30b29010123 |
| BLAKE2b-256 | 53b89d65e73d0b3b5089b03ced5e64d2e215c1cb5eb9a94b7d4b3dd3223df8c8 |
Provenance

The following attestation bundles were made for otomus_nerva-0.2.1-py3-none-any.whl:

Publisher: release-py.yml on otomus/nerva

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: otomus_nerva-0.2.1-py3-none-any.whl
- Subject digest: 0064b1209703ef52a816bdc41f4bda77e742046a9bbe019d35a14b67313cda46
- Sigstore transparency entry: 1214766520

Publisher details:
- Permalink: otomus/nerva@5380b97eb057d69579173d68d760f54a48ab8646
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/otomus
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release-py.yml@5380b97eb057d69579173d68d760f54a48ab8646
- Trigger Event: push