
Multi-Agent Observability SDK for Agent Lighthouse

Project description

Agent Lighthouse SDK (Python)

The official Python client for instrumenting AI agents with Agent Lighthouse.

Features

  • Automatic Tracing: Decorators for agents, tools, and LLM calls.
  • Async Support: Fully compatible with async/await workflows.
  • State Management: Expose internal agent state (memory, context) for real-time inspection.
  • Token Tracking: Automatically capture token usage and costs from LLM responses.

Installation

Install from PyPI:

pip install agent-lighthouse

Or install from source in development mode:

cd sdk
pip install -e .

Quick Start

1. Initialize Tracer

from agent_lighthouse import LighthouseTracer

# Use your API Key (starts with lh_)
tracer = LighthouseTracer(api_key="lh_...")

2. Add Decorators

Wrap your functions with @trace_agent, @trace_tool, or @trace_llm.

from agent_lighthouse import trace_agent, trace_tool, trace_llm

@trace_tool("Web Search")
def search_web(query):
    # ... logic ...
    return results

@trace_llm("GPT-4", model="gpt-4-turbo", cost_per_1k_prompt=0.01)
def call_llm(prompt):
    # ... call OpenAI ...
    return response

@trace_agent("Researcher")
def run_research_agent(topic):
    data = search_web(topic)
    summary = call_llm(f"Summarize {data}")
    return summary

3. Run It

Just run your script as normal. The SDK will automatically send traces to the backend.
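Under the hood, decorators like these generally wrap your function to time the call and report a span. As an illustrative sketch only (not the SDK's actual implementation), `@trace_tool` could be modeled as:

```python
import functools
import time

def trace_tool(name):
    """Illustrative tracing decorator: records the span name, duration,
    and outcome around each call. A real tracer would send this span to
    the backend instead of printing it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            status = "error"
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                duration_ms = (time.perf_counter() - start) * 1000
                print(f"[trace] tool={name} status={status} duration_ms={duration_ms:.1f}")
        return wrapper
    return decorator

@trace_tool("Web Search")
def search_web(query):
    return [f"result for {query}"]

search_web("observability")
```

Because the wrapper uses try/finally, the span is reported even when the tool raises, which is why failed calls still show up in the dashboard.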

State Inspection

Allow humans to inspect and modify agent state during execution:

from agent_lighthouse import get_tracer

@trace_agent("Writer")
def writer_agent():
    tracer = get_tracer()
    
    # Expose state
    tracer.update_state(
        memory={"draft": "Initial draft..."},
        context={"tone": "Professional"}
    )
    
    # ... execution continues ...
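A tracer's state store can be pictured as two dictionaries that each `update_state` call merges into, so the dashboard always sees the latest values. This is an assumption about the semantics, not the SDK's actual code:

```python
class StateStore:
    """Illustrative state holder: each update merges new memory/context
    keys rather than replacing the whole state (an assumption about
    update_state semantics, not the SDK's actual behavior)."""
    def __init__(self):
        self.memory = {}
        self.context = {}

    def update_state(self, memory=None, context=None):
        if memory:
            self.memory.update(memory)
        if context:
            self.context.update(context)

store = StateStore()
store.update_state(memory={"draft": "Initial draft..."})
store.update_state(context={"tone": "Professional"})
```

Merging (rather than replacing) means an agent can update one key at a time without re-sending its whole state on every call.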

Zero-Touch Auto-Instrumentation (Magic Import)

No code changes to your LLM calls. Just import once at the top of your script:

import agent_lighthouse.auto  # auto-instruments OpenAI, Anthropic, requests, and frameworks

This automatically captures:

  • LLM latency
  • Token usage
  • Cost (best-effort pricing)

Content capture is off by default. Enable it only if you explicitly want request/response payloads recorded:

export LIGHTHOUSE_CAPTURE_CONTENT=true
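Boolean flags like this are typically read from the environment at import time. A sketch of how such a flag might be parsed (the exact set of accepted truthy strings is an assumption, not documented SDK behavior):

```python
import os

def env_flag(name, default=False):
    """Parse a boolean environment variable, treating common truthy
    strings as True (the accepted values here are an assumption)."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")

os.environ["LIGHTHOUSE_CAPTURE_CONTENT"] = "true"
print(env_flag("LIGHTHOUSE_CAPTURE_CONTENT"))  # True
```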

Configuration

You can configure the SDK via environment variables:

Variable                       Description                                              Default
LIGHTHOUSE_API_KEY             Your machine API key                                     None
LIGHTHOUSE_BASE_URL            URL of the backend API                                   http://localhost:8000
LIGHTHOUSE_AUTO_INSTRUMENT     Enable auto-instrumentation                              1
LIGHTHOUSE_CAPTURE_CONTENT     Capture request/response payloads                        false
LIGHTHOUSE_LLM_HOSTS           Allowlist extra LLM hosts for requests instrumentation   ""
LIGHTHOUSE_PRICING_JSON        Pricing override as an inline JSON string                ""
LIGHTHOUSE_PRICING_PATH       Path to a pricing override JSON file                      ""
LIGHTHOUSE_DISABLE_FRAMEWORKS  Framework adapters to disable (comma-separated list)     ""
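The pricing override schema is not documented here. Assuming it maps model names to per-1k-token prompt and completion prices (the key names below are hypothetical), a best-effort cost estimate like the SDK's could work as follows:

```python
import json

# Hypothetical pricing schema: model name -> per-1k-token prices.
# This is an assumed format, not the SDK's documented one.
pricing_json = json.dumps({
    "gpt-4-turbo": {"prompt_per_1k": 0.01, "completion_per_1k": 0.03}
})

def estimate_cost(model, prompt_tokens, completion_tokens, pricing):
    """Best-effort cost estimate; returns None for unknown models."""
    rates = pricing.get(model)
    if rates is None:
        return None
    return (prompt_tokens / 1000) * rates["prompt_per_1k"] + \
           (completion_tokens / 1000) * rates["completion_per_1k"]

pricing = json.loads(pricing_json)
cost = estimate_cost("gpt-4-turbo", 1000, 500, pricing)
```

Returning None for unknown models matches the "best-effort" framing: a missing price yields no cost rather than a wrong one.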

Project details


Download files

Source distribution: agent_lighthouse-0.4.0.tar.gz (21.6 kB)
Built distribution: agent_lighthouse-0.4.0-py3-none-any.whl (20.5 kB)

