
Agentic Execution Protocol™ (AEP™) - trust & safety infrastructure for AI agents


aceteam-aep — SafeClaw Gateway


AceTeam™ trust & safety infrastructure for AI agents. The Agentic Execution Protocol™ (AEP™) adds cost tracking, safety detection, and enforcement to any LLM-powered tool — zero code changes required.

The gateway runs a single process on one port with three interfaces:

Path          What it does
/v1/*         OpenAI-compatible reverse proxy with safety enforcement
/dashboard/   Dashboard — cost, signals, policy controls, setup wizard
/mcp/         MCP tools for Claude Code and any MCP client

Installation

pip install aceteam-aep[all]                       # Everything (recommended)
pip install aceteam-aep[proxy,safety]              # Proxy + ML detectors (PII, content safety)
pip install aceteam-aep[proxy,custom-policies]     # Proxy + natural-language policies (PAW)
pip install aceteam-aep[proxy]                     # Lightweight proxy: regex + threshold detectors only
pip install aceteam-aep                            # Core: cost tracking + regex safety

The safety extra pulls in transformers + torch (ML detectors). The custom-policies extra pulls in programasweights (natural-language policy compilation, large native build). Both are opt-in so a default [proxy] install stays slim.
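The opt-in behavior follows the usual optional-dependency pattern; an illustrative guard (not the library's actual code):

```python
# Illustrative import guard: use ML detectors when the optional
# "safety" extra is installed, otherwise fall back to regex.
try:
    import transformers  # pulled in by aceteam-aep[safety]
    HAS_ML_DETECTORS = True
except ImportError:
    HAS_ML_DETECTORS = False

detector_backend = "ml" if HAS_ML_DETECTORS else "regex"
```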

Quick Start

# Install and start the gateway
pip install aceteam-aep[all]
aceteam-aep proxy

The gateway prints three URLs on startup:

  SafeClaw Gateway
  ───────────────────────────────────
  LLM Proxy:  http://localhost:8899/v1
  Dashboard:  http://localhost:8899/dashboard/
  MCP:        http://localhost:8899/mcp/

Open the dashboard — a setup wizard appears on first visit and walks you through pointing your agent at the proxy or configuring Claude Code.

Point an agent at the gateway:

export OPENAI_BASE_URL=http://localhost:8899/v1
export OPENAI_API_KEY=sk-your-key
openclaw run "analyze these financial statements"

Open http://localhost:8899/dashboard/ — every LLM call appears in real time with cost, safety signals, and enforcement decisions.

The proxy intercepts both directions:

  • Incoming requests — blocks dangerous prompts before they reach the API
  • Outgoing responses — blocks PII, toxic content, and cost anomalies before the agent sees them

Works with OpenClaw, LangChain, CrewAI, curl, or any tool that calls the OpenAI API.

What the Proxy Sees

The proxy is a reverse proxy (man-in-the-middle by design). It reads the full request AND full response. It can block in either direction.

Your Agent
    │
    ├─── REQUEST ──────────────────────────────┐
    │    messages: [user prompt, tool results]  │
    │                                           ▼
    │                                    ┌──────────────┐
    │                                    │  AEP Proxy   │
    │                                    │              │
    │                                    │  ✓ Input     │──── if dangerous ──→ BLOCK (never reaches API)
    │                                    │    text      │
    │                                    │              │──── if safe ──→ forward to OpenAI
    │                                    │              │
    │                                    │  ✓ Output    │──── if PII/toxic ──→ BLOCK (agent never sees it)
    │                                    │    text      │
    │                                    │              │──── if safe ──→ return to agent
    │                                    │  ✓ Cost      │
    │                                    │  ✓ Tool calls│
    │                                    └──────────────┘
    │                                           │
    ◄─── RESPONSE ─────────────────────────────┘
         assistant message, token usage
Data                          Proxy Sees It?   Details
User messages (input text)    Yes              Full message array from request body
LLM response (output text)    Yes              Full response including all choices
Tool call requests            Yes              What functions the LLM asks to call
Tool call results             Yes              Included in next request's messages
Token usage + cost            Yes              From response usage field
Agent actions between calls   No               File writes, code execution, browser actions happen inside the agent, not via the LLM API
Application context           No               Who is calling, data classification — unless sent via X-AEP-* headers

The proxy sees every word going to and from the LLM. It cannot see what the agent does between LLM calls. For that, use the SDK (Layer 2).

Two Layers: Proxy + SDK

Think WireGuard + Tailscale. WireGuard is a minimal wire protocol. Tailscale adds identity and management on top. Same here:

Layer 1 — AEP Proxy (free, zero code changes)

  • Sees all LLM traffic (input, output, tool calls, cost)
  • Runs safety detectors, enforces PASS/FLAG/BLOCK
  • Dashboard at /dashboard/
  • Works with any language, any framework

Layer 2 — AEP SDK (application-level context)

  • Adds identity: X-AEP-Entity: org:acme
  • Adds governance: X-AEP-Classification: confidential
  • Adds provenance: citation chains, source tracking
  • Via HTTP headers through the proxy, or via Python wrap()

Layer 1 gets developers in the door. Layer 2 is what enterprises need for compliance.

Python SDK — Wrap Your Existing Client

import openai
from aceteam_aep import wrap

client = wrap(openai.OpenAI())

# Use exactly as before — AEP intercepts transparently
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)

# AEP tracks everything
print(client.aep.cost_usd)            # $0.000150
print(client.aep.enforcement.action)   # "pass"
print(client.aep.safety_signals)       # []
client.aep.print_summary()             # Colored CLI output

Works with OpenAI, Anthropic, and any OpenAI-compatible client. Sync and async.

import anthropic
from aceteam_aep import wrap

client = wrap(anthropic.Anthropic())
# Same API — client.aep.cost_usd, client.aep.safety_signals, etc.

Safety Signals

Every LLM call is evaluated by pluggable safety detectors:

Detector         What It Catches                                                                              Model
PII              SSN, email, phone, credit cards in input AND output                                          iiiorg/piiranha-v1-detect-personal-information (~110M)
Content Safety   Toxic, harmful, or unsafe content                                                            s-nlp/roberta_toxicity_classifier (~125M)
Agent Threat     Port scans, subprocess execution, reverse shells, credential access, destructive commands    Regex patterns (11 patterns)
Cost Anomaly     Spend spikes >5x session average                                                             Statistical (no model)

Models lazy-load on first use and run on CPU. PII detection falls back to regex if transformers is not installed.
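The Cost Anomaly detector needs no model; the >5x rule amounts to a running-average threshold. A minimal sketch (hypothetical class, not the library's internal detector API):

```python
class CostAnomalyDetector:
    """Flag any call whose cost exceeds `multiplier` times the
    running session average. Illustrative sketch only."""

    def __init__(self, multiplier: float = 5.0):
        self.multiplier = multiplier
        self.total = 0.0
        self.count = 0

    def check(self, cost_usd: float) -> bool:
        # First call establishes the baseline and is never anomalous.
        anomalous = self.count > 0 and cost_usd > self.multiplier * (self.total / self.count)
        self.total += cost_usd
        self.count += 1
        return anomalous
```

Under this sketch, a $0.02 call after two $0.001 calls exceeds 5x the running average and trips the detector.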

Pre-flight Blocking

wrap() runs detectors on the input before making the API call. If a detector returns a HIGH severity signal that the enforcement policy would block, the request never reaches the LLM. Cost: $0.

from aceteam_aep import wrap, AepPreflightBlock

client = wrap(openai.OpenAI())
try:
    response = client.chat.completions.create(...)
except AepPreflightBlock as e:
    print(f"Blocked before API call: {e}")
    # e.decision.reason has the details

Configurable Enforcement Policy

client = wrap(openai.OpenAI(), policy={
    "default_action": "flag",
    "detectors": {
        "pii": {"action": "block", "threshold": 0.8},
        "agent_threat": {"action": "block"},
        "cost_anomaly": {"action": "pass", "multiplier": 10},
    },
})

Or from a YAML file: wrap(client, policy="aep-policy.yaml")
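The YAML form mirrors the dict above; a sketch (key names assumed from the policy dict, not verified against the library's schema):

```yaml
# aep-policy.yaml — illustrative, mirroring the Python policy dict
default_action: flag
detectors:
  pii:
    action: block
    threshold: 0.8
  agent_threat:
    action: block
  cost_anomaly:
    action: pass
    multiplier: 10
```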

Enforcement: PASS / FLAG / BLOCK

Every call produces an enforcement decision based on signal severity:

  • PASS — No signals or low severity. Safe to proceed.
  • FLAG — Medium severity. Route to human review.
  • BLOCK — High severity (PII, toxic content). Prevent delivery.
client = wrap(openai.OpenAI())
response = client.chat.completions.create(...)

def route(client, response):
    match client.aep.enforcement.action:
        case "pass":
            return response
        case "flag":
            queue_for_review(response)
        case "block":
            return reject(client.aep.enforcement.reason)

Custom Detectors

from aceteam_aep import wrap
from aceteam_aep.safety.base import SafetySignal

class MyDetector:
    name = "my_detector"

    def check(self, *, input_text, output_text, call_id, **kwargs):
        if "secret" in output_text.lower():
            return [SafetySignal(
                signal_type="data_leak",
                severity="high",
                call_id=call_id,
                detail="Potential secret in output",
            )]
        return []

client = wrap(openai.OpenAI(), detectors=[MyDetector()])
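The check logic can be exercised standalone before handing the detector to wrap(); in this sketch SafetySignal is swapped for a plain dict so the snippet runs without the SDK installed:

```python
class MyDetector:
    name = "my_detector"

    def check(self, *, input_text, output_text, call_id, **kwargs):
        # Same logic as above, returning plain dicts instead of SafetySignal.
        if "secret" in output_text.lower():
            return [{"signal_type": "data_leak", "severity": "high",
                     "call_id": call_id, "detail": "Potential secret in output"}]
        return []

signals = MyDetector().check(
    input_text="", output_text="here is the SECRET token", call_id="call-1"
)
```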

Governance Headers

Inject governance context via HTTP headers (any language, any framework):

curl http://localhost:8899/v1/chat/completions \
  -H "X-AEP-Entity: org:acme-corp" \
  -H "X-AEP-Classification: confidential" \
  -H "X-AEP-Consent: gdpr=granted,training=no" \
  -H "X-AEP-Budget: 5.00" \
  -H "X-AEP-Trace-ID: trace-abc123"

The proxy parses these headers, strips them before forwarding to the LLM (governance context never leaks to the provider), and includes classification and trace ID in the response headers.
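Conceptually, the parse-and-strip step looks like this (hypothetical helper, not the proxy's actual code):

```python
def split_governance(headers: dict) -> tuple[dict, dict]:
    """Collect X-AEP-* headers as governance context and strip them
    from what gets forwarded to the LLM provider. Illustrative only."""
    governance, forwarded = {}, {}
    for key, value in headers.items():
        if key.lower().startswith("x-aep-"):
            # "X-AEP-Entity" -> governance key "entity"
            governance[key[len("X-AEP-"):].lower()] = value
        else:
            forwarded[key] = value
    return governance, forwarded
```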

Docker Sidecar

For containerized agents (NanoClaw, CrewAI, DeerFlow, OpenClaw, NemoClaw):

services:
  aep-proxy:
    image: ghcr.io/aceteam-ai/aep-proxy:latest
    ports: ["8899:8899"]
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
  agent:
    image: your-agent:latest
    environment:
      OPENAI_BASE_URL: http://aep-proxy:8899/v1

One env var. Zero code changes. The agent doesn't know AEP exists.

Tested with NVIDIA NemoClaw/OpenShell: Agent threats (port scanning, subprocess execution) blocked at the proxy before reaching the LLM. Normal calls pass through with receipts. See aep-quickstart for the full NemoClaw demo.

Claude Code Integration

Add the gateway as an MCP server in your Claude Code config:

{
  "mcpServers": {
    "aceteam": {
      "type": "streamable-http",
      "url": "http://localhost:8899/mcp/"
    }
  }
}

This gives Claude four tools: check_safety, get_safety_status, set_safety_policy, and get_cost_summary. All tools share live state with the proxy — safety checks via MCP appear in the dashboard and affect traffic enforcement.

See docs/engineering/mcp-integration.md for full tool reference.

Dashboard

Two views — toggle between Developer and Executive:

Developer: Individual calls, safety signals, cost per call, governance context, call timeline.

Executive: Enforcement coverage %, threats blocked, compliance status (PII/threats/toxicity/anomalies), safety breakdown, cost attribution by entity.

Policy controls: Per-detector checkboxes and per-category Trust Engine toggles — adjust enforcement without restarting. The master safety toggle in the header disables all detectors instantly.

Setup wizard: Shows on first visit (zero calls). Guides you through API key configuration and agent setup — provides the OPENAI_BASE_URL export command and Claude Code MCP config to copy.

client.aep.serve_dashboard()  # http://localhost:8899

Dark-themed local web UI. Auto-refreshes every 2 seconds.

CLI Output

client.aep.print_summary()
──────────────────────────────────────────────────
  AEP Session Summary
──────────────────────────────────────────────────
  Calls:  5
  Cost:   $0.004200
  Safety: PASS
──────────────────────────────────────────────────

Agent Loop (Advanced)

For building agents from scratch with full AEP compliance:

import asyncio

from aceteam_aep import create_client, run_agent_loop, ChatMessage, tool

client = create_client("gpt-4o", api_key="sk-...")

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

result = asyncio.run(run_agent_loop(
    client,
    [ChatMessage(role="user", content="Search for AEP protocol")],
    tools=[search],
    system_prompt="You are a helpful assistant.",
))

Workshop Guide

Step-by-step setup in 5 minutes — from install to safety signals firing:

docs/workshop-guide.md

Covers: proxy setup, routing agents (Python/OpenClaw/curl), triggering safety signals, governance headers, custom detectors. Works for workshops, onboarding, or self-guided evaluation.

Providers

  • OpenAI (GPT-4o, GPT-5, o1, o3)
  • Anthropic (Claude Opus, Sonnet, Haiku)
  • Google (Gemini 2.5, 3.0)
  • xAI (Grok)
  • Ollama (local models)
  • OpenAI-compatible (SambaNova, TheAgentic, DeepSeek)

Safety Badge

Add this badge to your repo's README to show it uses AEP safety enforcement:

[![AEP Safe](https://img.shields.io/badge/AEP-Safe-brightgreen)](https://github.com/aceteam-ai/aceteam-aep)

Trademarks

"Agentic Execution Protocol," "AEP," and "AceTeam" are trademarks of AceTeam. The software is licensed under Apache 2.0. The trademark is not included in the license grant — you may not use these names to endorse or promote derivative works without written permission.
