
Brizz SDK


The official Python observability SDK for AI applications on the Brizz platform.

Installation

pip install brizz
# or
uv add brizz
# or
poetry add brizz

Quick Start

from brizz import Brizz

# Initialize
Brizz.initialize(
    api_key='your-brizzai-api-key',
    app_name='my-app',
)

Important: Initialize Brizz before importing any libraries you want to instrument (e.g., OpenAI). If using dotenv, use from dotenv import load_dotenv; load_dotenv() before importing brizz.

Session Tracking

Group related operations and traces under a session context. Brizz provides two approaches:

Context Manager Approach (Recommended)

import json

import openai

from brizz import start_session, astart_session, emit_event

# Basic usage - all telemetry tagged with session ID
with start_session('session-123'):
    # All traces, events, and spans within this block
    # will be tagged with session.id = session-123
    response = openai.chat.completions.create(
        model='gpt-4',
        messages=[{'role': 'user', 'content': 'Hello'}]
    )
    emit_event('user.action', {'action': 'chat'})

# Enhanced usage - capture session object for custom properties
with start_session('session-456') as session:
    # Update properties using keyword arguments
    session.update_properties(user_id='user-123', model='gpt-4')

    # Or use a dictionary
    session.update_properties({'retry_count': 3, 'success': True})

    # Or combine both
    session.update_properties({'version': '1.0'}, environment='production')


    # Make LLM call
    response = openai.chat.completions.create(
        model='gpt-4',
        messages=[{'role': 'user', 'content': 'Hello'}]
    )

# Optional: Manual input/output tracking
# Use when you need to format or extract specific data for tracking
with start_session('session-789') as session:
    # Example: Extract user query from structured request
    request_data = {"query": "What's the weather?", "context": {...}}
    session.set_input(request_data["query"])  # Track just the query

    # Send full structured data to LLM
    response = openai.chat.completions.create(
        model='gpt-4',
        messages=[{'role': 'user', 'content': json.dumps(request_data)}]
    )

    # Example: Extract answer field from JSON response
    response_json = json.loads(response.choices[0].message.content)
    session.set_output(response_json["answer"])  # Track just the answer

# Async version
async def process_user_workflow():
    async with astart_session('session-999') as session:
        session.update_properties(user_id='user-456')

        response = await openai.chat.completions.create(
            model='gpt-4',
            messages=[{'role': 'user', 'content': 'Hello'}]
        )
        return response

# With additional properties
with start_session('session-999', {'user_id': 'user-789', 'region': 'us-east'}):
    # All telemetry includes session.id, user_id, and region
    emit_event('purchase', {'amount': 99.99})

Session Methods:

  • session.update_properties(**kwargs) - Update custom properties on session span (stored as brizz.{key})
  • session.set_input(text, **kwargs) - Optional: Manually record user input; kwargs attach per-turn metadata rendered in the dashboard's Context panel
  • session.set_output(text, **kwargs) - Optional: Manually record AI output; kwargs attach per-turn metadata rendered in the dashboard's Context panel
  • session.set_title(text) - Set a session title (typically used with mode='title')

Per-turn context example:

with start_session("session-123") as session:
    session.set_input("Why is my bill high?", selected_invoice="INV-9182")
    reply = openai.chat.completions.create(...)
    session.set_output(
        reply.choices[0].message.content,
        message_id="msg-42",
        sources=["doc-abc"],
    )

Note:

  • set_input() and set_output() are optional - use them only when you need manual formatting
  • Multiple calls to set_input()/set_output() are supported - values are accumulated in arrays and serialized as JSON strings
  • LLM calls are automatically traced; manual input/output tracking is for cases where the raw data needs formatting
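The accumulation behavior can be pictured with a small stand-in class (hypothetical, for illustration only; `TurnRecorder` is not part of the Brizz SDK):

```python
import json

class TurnRecorder:
    """Illustrative sketch of accumulate-then-serialize input tracking."""

    def __init__(self):
        self._inputs = []

    def set_input(self, text, **metadata):
        # each call appends one entry; earlier entries are never overwritten
        self._inputs.append({"text": text, **metadata})

    def serialized(self):
        # the accumulated array is stored as a single JSON string
        return json.dumps(self._inputs)

rec = TurnRecorder()
rec.set_input("Why is my bill high?", selected_invoice="INV-9182")
rec.set_input("Show me last month too")
print(rec.serialized())
```

Calling `set_input()` twice here yields a two-element JSON array, mirroring the accumulation described above.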

Session Title Generation

If you use an LLM call to generate session titles, wrap it so those spans don't appear as part of the conversation:

from brizz import start_session, start_session_title

with start_session('session-123') as session:
    response = openai.chat.completions.create(...)

    # Title generation — excluded from conversation view
    with start_session_title() as title:
        generated = openai.chat.completions.create(
            model='gpt-4',
            messages=[{'role': 'user', 'content': 'Summarize this chat in 3 words'}]
        )
        title.set_title(generated.choices[0].message.content)

# Or use mode='title' on start_session directly
with start_session('session-123', mode='title') as session:
    title = openai.chat.completions.create(...)
    session.set_title(title.choices[0].message.content)

# Or use start_session_title outside a session (pass session_id explicitly)
with start_session_title(session_id='session-123') as title:
    title.set_title("My Title")

Accessing the Active Session

Use get_active_session() to retrieve the current session from anywhere within a start_session scope — no need to pass the session object through your call stack:

from brizz import start_session, get_active_session

def deep_helper():
    session = get_active_session()
    if session:
        session.update_properties(step='helper')

with start_session('session-123'):
    deep_helper()  # accesses session without it being passed as a parameter

# Outside a session, returns None
get_active_session()  # None
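This pattern is typically built on Python's contextvars. A minimal stdlib-only sketch of the idea, with hypothetical stand-ins for `start_session`/`get_active_session` (not Brizz's actual implementation):

```python
from contextlib import contextmanager
from contextvars import ContextVar

_active: ContextVar = ContextVar("session", default=None)

class Session:
    def __init__(self, session_id):
        self.session_id = session_id
        self.properties = {}

    def update_properties(self, **kwargs):
        self.properties.update(kwargs)

@contextmanager
def start_session(session_id):
    session = Session(session_id)
    token = _active.set(session)      # make the session visible to this context
    try:
        yield session
    finally:
        _active.reset(token)          # restore whatever was active before

def get_active_session():
    return _active.get()

def deep_helper():
    session = get_active_session()    # no session parameter needed
    if session:
        session.update_properties(step="helper")

with start_session("session-123") as s:
    deep_helper()

assert s.properties == {"step": "helper"}
assert get_active_session() is None   # nothing active outside the block
```

Because contextvars are async-aware, the same mechanism works across `await` boundaries without leaking between concurrent tasks.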

Function Wrapper Approach

from brizz import with_session_id, awith_session_id

# Wrap synchronous functions
def sync_workflow(chat_id: str, data: dict):
    return with_session_id(chat_id, process_data, data)

# Wrap async functions
async def process_user_workflow(chat_id):
    response = await awith_session_id(
        chat_id,
        openai.chat.completions.create,
        model='gpt-4',
        messages=[{'role': 'user', 'content': 'Hello'}]
    )
    return response

Custom Properties

Add custom properties to telemetry context. These properties will be attached to all traces, spans, and events within the scope:

Context Manager Approach (Recommended)

from brizz import custom_properties, acustom_properties

# Synchronous context manager
with custom_properties({'user_id': '123', 'experiment': 'variant-a'}):
    # All telemetry here includes user_id and experiment
    emit_event('api.request', {'endpoint': '/users'})
    response = call_external_api()

# Async context manager
async def process_with_context():
    async with acustom_properties({'team_id': 'abc', 'region': 'us-east'}):
        # All telemetry includes team_id and region
        result = await async_operation()
        return result

# Nested contexts (properties are merged)
with custom_properties({'tenant_id': 'tenant-1'}):
    with custom_properties({'request_id': 'req-456'}):
        # Both tenant_id and request_id are available
        emit_event('data.access')

Function Wrapper Approach

from brizz import with_properties, awith_properties

# Sync usage
result = with_properties(
    {'user_id': '123', 'experiment': 'variant-a'},
    my_function,
    arg1, arg2
)

# Async usage
result = await awith_properties(
    {'team_id': 'abc', 'region': 'us-east'},
    my_async_function,
    arg1, arg2
)

Event Examples

from brizz import emit_event

emit_event('user.signup', {'user_id': '123', 'plan': 'pro'})
emit_event('user.payment', {'amount': 99, 'currency': 'USD'})

Deployment Environment

Optionally specify the deployment environment for better filtering and organization:

Brizz.initialize(
    api_key='your-api-key',
    app_name='my-app',
    environment='production',  # Optional: 'dev', 'staging', 'production', etc.
)

Environment Variables

BRIZZ_API_KEY=your-api-key                  # Required
BRIZZ_BASE_URL=https://telemetry.brizz.dev  # Optional
BRIZZ_APP_NAME=my-app                       # Optional
BRIZZ_ENVIRONMENT=production                # Optional: deployment environment (dev, staging, production)
BRIZZ_DISABLE_SPAN_EXPORTER=true            # Optional: disable span export (see below)

Disable Span Export

Keep Brizz.initialize() in your code without sending any spans — useful for dev/test environments. When enabled, the SDK skips exporter, processor, and TracerProvider setup entirely; spans become no-ops via OpenTelemetry's default tracer.

Brizz.initialize(api_key='your-api-key', disable_span_exporter=True)

Or via env var: BRIZZ_DISABLE_SPAN_EXPORTER=true.

PII Masking

Automatically protects sensitive data in traces:

# Option 1: Enable default masking (simple)
Brizz.initialize(
    api_key='your-api-key',
    masking=True,  # Enables all built-in PII patterns
)

# Option 2: Custom masking configuration
from brizz import Brizz, MaskingConfig, SpanMaskingConfig, AttributesMaskingRule

Brizz.initialize(
    api_key='your-api-key',
    masking=MaskingConfig(
        span_masking=SpanMaskingConfig(
            rules=[
                AttributesMaskingRule(
                    attribute_pattern=r'gen_ai\.(prompt|completion)',
                    mode='partial',  # 'partial' or 'full'
                    patterns=[r'sk-[a-zA-Z0-9]{32}'],  # Custom regex patterns
                ),
            ],
        ),
    ),
)

Built-in patterns: emails, phone numbers, SSNs, credit cards, API keys, crypto addresses, and more. Use masking=True for defaults or MaskingConfig for custom rules.
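To make the `mode='partial'` idea concrete, here is a stdlib regex sketch of a partial-masking pass over attribute text. Both the pattern and the keep-4-characters rule are assumptions for illustration, not the SDK's built-in behavior:

```python
import re

# matches the custom secret pattern from the config example above
SECRET = re.compile(r"sk-[a-zA-Z0-9]{32}")

def mask_partial(text):
    # keep the first 4 characters of each match, redact the rest
    return SECRET.sub(lambda m: m.group()[:4] + "***", text)

print(mask_partial("key=sk-" + "a" * 32))  # -> key=sk-a***
print(mask_partial("no secrets here"))     # -> no secrets here
```

A full-mask mode would replace the entire match rather than preserving a prefix.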

Instrumentation Control

By default, Brizz automatically instruments AI libraries and blocks HTTP clients (urllib, urllib3, requests, httpx, aiohttp_client) to prevent noise. You can customize which instrumentations to block:

Brizz.initialize(api_key="your-api-key")

# Block specific instrumentations (replaces defaults)
Brizz.initialize(
    api_key="your-api-key",
    blocked_instrumentations=["urllib", "requests", "httpx", "openai"]  # Custom list
)

# Enable all instrumentations (including HTTP clients)
Brizz.initialize(
    api_key="your-api-key",
    blocked_instrumentations=[]  # Empty list = block nothing
)

Langfuse Integration

Brizz runs alongside Langfuse without conflicts. However, if you want to avoid Brizz spans reaching Langfuse (or vice versa), you can disable Brizz instrumentation:

from brizz import Brizz

# Disable Brizz instrumentation to prevent spans from crossing between systems
Brizz.initialize(api_key="your-api-key", allowed_instrumentations=[])

# Now use Langfuse - only Langfuse will instrument your code
from langfuse import Langfuse
langfuse = Langfuse()

Manual Input/Output in Langfuse

When using Langfuse, you can add manual input/output at the trace level. Brizz automatically extracts and displays this data in the conversation view:

from langfuse import Langfuse

langfuse = Langfuse()

# Create trace with manual input/output
trace = langfuse.trace(
    name="my-trace",
    input={"question": "What is 2+2?"},   # shown as the user message
    output={"answer": "The answer is 4"}  # shown as the assistant message
)

# Or use brizz.input / brizz.output keys for specific extraction
trace = langfuse.trace(
    name="my-trace",
    input={"brizz.input": "What is 2+2?", "context": {...}},  # Only brizz.input shown
    output={"brizz.output": "4", "metadata": {...}}  # Only brizz.output shown
)

See examples/langfuse_only_example.py for complete examples.
