
NeuroFence SDK - framework-agnostic interception wrapper and API client

Project description

NeuroFence

AI Agent Safety System - real-time contamination detection & automatic isolation.

What you need to provide (keys / credentials)

  • Required: nothing if you use Docker Compose defaults.
  • Required (if using your own Postgres): DB_HOST, DB_PORT, DB_NAME, DB_USER, DB_PASSWORD (or DATABASE_URL).
  • Optional: OPENAI_API_KEY (only for future OpenAI-based extensions; current system works offline).

Security note: never put real keys in .env.example. Use a local .env file (gitignored).

Details: see docs/SECRETS.md.
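For Option B below, a local `.env` can look like the following sketch. The values are illustrative placeholders; only the variable names come from the list above.

```shell
# .env — keep this file local and gitignored; never commit real credentials
DB_HOST=localhost
DB_PORT=5432
DB_NAME=neurofence
DB_USER=postgres
DB_PASSWORD=change-me

# Alternatively, a single connection string instead of the five variables above:
# DATABASE_URL=postgresql://postgres:change-me@localhost:5432/neurofence
```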

Quick start (Windows)

Prerequisites

  • Python 3.9+
  • PostgreSQL 13+ (recommended for persistence)

Option A (recommended): Run everything with Docker

  1. Install Docker Desktop
  2. Start NeuroFence (API + Postgres)

One-command launcher (Windows):

cd c:\Users\Win11\Desktop\NeuroFence
.\run-neurofence.cmd

This will:

  • build + start the Docker Compose stack
  • wait for GET /health to return {"status":"healthy"}
  • run pytest -q inside the API container
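The health-wait step above can be sketched in Python. This is a hypothetical illustration of the polling logic, not the launcher script's actual implementation; the `fetch` and `sleep` parameters are injectable so the loop can be tested without a running server.

```python
import json
import time
import urllib.request

def wait_for_health(url, timeout=60, fetch=None, sleep=time.sleep):
    """Poll GET /health until it reports {"status": "healthy"} or the timeout elapses."""
    def default_fetch(u):
        # Real HTTP GET against the API; replaced by a stub in tests.
        with urllib.request.urlopen(u) as resp:
            return json.load(resp)

    fetch = fetch or default_fetch
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch(url).get("status") == "healthy":
                return True
        except OSError:
            pass  # API not up yet; retry
        sleep(1)
    return False
```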

Alternatively, run Compose directly:

cd c:\Users\Win11\Desktop\NeuroFence
docker compose up --build
  3. Open the API

Notes:

  • DB is persisted in a Docker volume neurofence_pgdata.
  • Schema is created automatically on API startup.

Option B: Local Python + your local PostgreSQL

1) Create venv + install deps

cd c:\Users\Win11\Desktop\NeuroFence
py -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt

2) Configure environment

copy .env.example .env
# edit .env if your Postgres creds differ

3) Initialize DB

python init_db.py

4) Run API

python -m uvicorn backend.main:app --reload --port 8000

Health check:

curl http://localhost:8000/health
5) Run demo

python examples\demo_complete.py

API endpoints

  • GET /health
  • POST /intercept
  • GET /stats
  • GET /forensics/{agent_name}
  • POST /isolate/{agent_name}
  • POST /release/{agent_name}
  • POST /update-baseline/{agent_name}

Example request:

curl -X POST http://localhost:8000/intercept \
  -H "Content-Type: application/json" \
  -d '{"sender":"agent_a","recipient":"agent_b","content":"hello"}'

Testing

The pytest suite uses an in-memory SQLite database and a fake embedding model (so it runs fast and does not download large ML models):

pytest -q
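A sketch of how such a test setup can be wired, purely as an illustration: the class and schema below are hypothetical stand-ins, not the project's actual conftest. The idea is a deterministic fake embedder (no model download) plus a throwaway in-memory SQLite database.

```python
import sqlite3

class FakeEmbedder:
    """Deterministic stand-in for a sentence-embedding model."""
    def encode(self, text):
        # Derive a tiny 4-dimensional "embedding" from the text's hash;
        # stable within a test run, requires no downloads.
        return [float((hash(text) >> shift) & 0xFF) / 255.0
                for shift in range(0, 32, 8)]

# Fresh in-memory database per run; no Postgres server required.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (sender TEXT, recipient TEXT, content TEXT)")
```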

Framework integration (automatic interception)

For a full, step-by-step integration guide (recommended), see: docs/INTEGRATION.md.

Install the SDK (like a framework)

If you want to integrate NeuroFence into another Python project, install the SDK package:

cd C:\path\to\NeuroFence
pip install -e .

Optional CLI (after install):

# If your venv is activated:
neurofence health --url http://localhost:8000

# If you don't want to rely on PATH/venv activation:
python -m neurofence_sdk.cli health --url http://localhost:8000

Deploy anywhere (not just localhost)

NeuroFence is an HTTP service. You can run it on:

  • a VM/server and point your app to http://<server-ip>:8000
  • Kubernetes (Service/Ingress)
  • a Docker host (recommended for teams)

Once deployed, set the SDK base_url to that address (see docs/INTEGRATION.md).

NeuroFence can run as a standalone service and enforce interception as long as you integrate at an enforcement point.

Two universal patterns:

  1. Message-bus / send() wrapper (most reliable)
  • If your framework has any function/method that ultimately sends agent-to-agent messages, wrap it once.
  • Every message is checked via POST /intercept before delivery.
  2. LLM gateway (works across frameworks that all call the same LLM API)
  • Point your framework's LLM base URL at a gateway that calls NeuroFence before forwarding.
  • This protects messages that flow through the model call path, but it does not automatically cover out-of-band channels.

Drop-in send() wrapper (Python)

See examples/framework_agnostic_integration.py for a minimal example.

The wrapper lives in neurofence_sdk/guard.py:

from neurofence_sdk import wrap_send

def send_message(sender, recipient, content):
    ...  # your framework's real delivery logic goes here

send_message = wrap_send(send_message, base_url="http://localhost:8000")
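To make the pattern concrete, here is a minimal re-implementation of such a guard. This is an assumption-laden sketch, not the SDK's actual `wrap_send`: the `"blocked"` response field is a hypothetical name, and the `check` parameter is injectable only so the wrapper can be exercised without a running server.

```python
import functools
import json
import urllib.request

def guarded_send(send_fn, base_url, check=None):
    """Wrap a send() so every message is vetted via POST /intercept before delivery.

    Illustrative only; the real SDK's wrap_send may behave differently.
    """
    def default_check(payload):
        req = urllib.request.Request(
            f"{base_url}/intercept",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    check = check or default_check

    @functools.wraps(send_fn)
    def wrapper(sender, recipient, content):
        verdict = check({"sender": sender, "recipient": recipient, "content": content})
        # "blocked" is an assumed field name, not the SDK's documented schema.
        if verdict.get("blocked"):
            raise RuntimeError(f"NeuroFence blocked message from {sender!r}")
        return send_fn(sender, recipient, content)

    return wrapper
```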

Notes

  • For production, keep ISOLATION_ENABLED=true and use PostgreSQL.
  • Baselines improve semantic anomaly detection; update per agent via POST /update-baseline/{agent_name}.

Download files

Download the file for your platform.

Source Distribution

neurofence_sdk-1.0.1.tar.gz (6.9 kB)


Built Distribution


neurofence_sdk-1.0.1-py3-none-any.whl (7.0 kB)


File details

Details for the file neurofence_sdk-1.0.1.tar.gz.

File metadata

  • Download URL: neurofence_sdk-1.0.1.tar.gz
  • Upload date:
  • Size: 6.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for neurofence_sdk-1.0.1.tar.gz:

  • SHA256: 37803c25c48e61cc504e994f5504d1903bffe1d222e74759691494af419d539f
  • MD5: 46796d699ecee067bce331178cb57649
  • BLAKE2b-256: ec82ffbe82b29b018d608bae1e08484670bda2e25ee4c421b0100b07fce34c64


File details

Details for the file neurofence_sdk-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: neurofence_sdk-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 7.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for neurofence_sdk-1.0.1-py3-none-any.whl:

  • SHA256: 779461f732248bddc4efecef2709754cdbfb0d31bdd10411145d327d6d1cd2e8
  • MD5: 986aa2ab3c8e76f4f92b923236684f16
  • BLAKE2b-256: 15ea807d7d1be21e1cf2c9cb3c41b8242f300f84a298cc75e734a7552be766d6

