LLM JSON Streaming
A unified Python library for streaming structured JSON outputs from OpenAI, Anthropic (Claude), and Google Gemini.
This library abstracts the differences between providers' structured output APIs and provides a consistent interface to stream JSON data and parsed Pydantic objects.
Features
- Unified Interface: Use a single API to interact with OpenAI, Anthropic, and Google Gemini.
- JSON Streaming: Access raw JSON chunks as they are generated (delta).
- Structured Outputs: Enforce schema validation using Pydantic models.
- Partial Parsing: Access accumulated JSON strings during streaming.
- Claude Structured Outputs: Automatically upgrades Claude Sonnet 4.5 / Opus 4.1 requests to Anthropic's JSON outputs for guaranteed schemas.
- Claude Prefill Strategy: Older Claude models avoid tool calls entirely—schema-aware prefilling keeps responses JSON-only while still streaming deltas. Includes JSON repair for partial object support.
- Google Gemini Support: Native structured outputs with JSON repair for enhanced partial object support.
Installation
📦 From PyPI (Recommended)
Install the package from PyPI using pip or uv:
# Using pip
pip install llm-json-streaming
# Using uv (recommended)
uv add llm-json-streaming
🧪 From Test PyPI
For testing pre-release versions:
# Using pip
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.tw.martin98.com/simple/ llm-json-streaming
# Using uv
uv add --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.tw.martin98.com/simple/ llm-json-streaming
🛠️ From Source
Install from source for development:
# Clone the repository
git clone https://github.com/daniel-style/llm-json-streaming.git
cd llm-json-streaming
# Using uv (recommended)
uv sync
# Or using pip
pip install -e .
📋 Package Information
- PyPI: https://pypi.tw.martin98.com/project/llm-json-streaming/
- Test PyPI: https://test.pypi.org/project/llm-json-streaming/
- Current Version: 0.1.0
- Python: 3.9+
- Dependencies: Automatically installed
Configuration
Set your API keys in a .env file:
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=https://api.openai.com/v1
ANTHROPIC_API_KEY=your_anthropic_api_key
ANTHROPIC_BASE_URL=https://api.anthropic.com
GEMINI_API_KEY=your_gemini_api_key
GOOGLE_BASE_URL=https://generativelanguage.googleapis.com # Optional
Usage
🚀 Quick Start
Define your output schema using Pydantic and pass it to the provider:
import asyncio
import os
from pydantic import BaseModel
from llm_json_streaming import create_provider

# 1. Define your schema
class UserProfile(BaseModel):
    name: str
    age: int
    bio: str
    skills: list[str] = []

async def main():
    # 2. Initialize provider using the factory
    # Available: "openai", "anthropic", "claude", "google"
    # Ensure environment variables are set, or pass api_key="..."
    try:
        # For Anthropic, you can optionally specify mode:
        provider = create_provider("openai")  # Use OpenAI
        # provider = create_provider("anthropic", mode="auto")  # Anthropic with auto-detection
        # provider = create_provider("google")  # Google Gemini
    except ValueError as e:
        print(f"Provider creation error: {e}")
        return

    prompt = "Generate a profile for a fictional software engineer."

    # 3. Stream results
    print("🔄 Streaming JSON...")
    try:
        async for chunk in provider.stream_json(prompt, UserProfile):
            # Real-time partial parsed object (recommended for streaming updates)
            if "partial_object" in chunk:
                obj = chunk["partial_object"]
                # Handle both dict and Pydantic objects
                if hasattr(obj, 'name'):  # Pydantic object
                    name = obj.name or "..."
                    age = obj.age if obj.age else "?"
                else:  # Dict object
                    name = obj.get('name', "...")
                    age = obj.get('age', "?")
                print(f"\r📝 Current: {name}, Age: {age}", end="", flush=True)

            # Final parsed object (complete and validated)
            if "final_object" in chunk:
                final_profile = chunk["final_object"]
                print(f"\n\n✅ Complete: {final_profile.name}, Age: {final_profile.age}")
                print(f"📋 Bio: {final_profile.bio}")
                if final_profile.skills:
                    print(f"🛠️ Skills: {', '.join(final_profile.skills)}")
                break

    except Exception as e:
        print(f"\n❌ Error during streaming: {e}")

if __name__ == "__main__":
    asyncio.run(main())
🔧 Advanced Usage
Multiple Providers Comparison
import asyncio
from llm_json_streaming import create_provider
from pydantic import BaseModel

class TaskResult(BaseModel):
    title: str
    status: str
    priority: int

async def compare_providers():
    providers = {
        "OpenAI": create_provider("openai"),
        "Anthropic": create_provider("anthropic", mode="auto"),
        "Google": create_provider("google")
    }

    prompt = "Create a software development task with title, status, and priority"
    results = {}

    for name, provider in providers.items():
        try:
            async for chunk in provider.stream_json(prompt, TaskResult):
                if "final_object" in chunk:
                    results[name] = chunk["final_object"]
                    print(f"✅ {name}: {results[name].title}")
                    break
        except Exception as e:
            print(f"❌ {name} failed: {e}")

    return results

# Run comparison
# asyncio.run(compare_providers())
Error Handling & Type Safety
import asyncio
from llm_json_streaming import create_provider
from pydantic import BaseModel, ValidationError

class APIResponse(BaseModel):
    success: bool
    data: dict
    error_message: str = ""

async def safe_streaming_example():
    try:
        provider = create_provider("anthropic")  # Fallback provider
        async for chunk in provider.stream_json(
            "Process this user request",
            APIResponse
        ):
            if "partial_object" in chunk:
                obj = chunk["partial_object"]
                # Safe object handling
                if isinstance(obj, dict):
                    # Handle dict objects
                    success = obj.get('success', False)
                elif hasattr(obj, 'success'):
                    # Handle Pydantic objects
                    success = obj.success
                else:
                    print("⚠️ Unexpected object type")
                    continue
                # Process partial results...

            if "final_object" in chunk:
                final = chunk["final_object"]
                print(f"✅ Final result: {final}")
                break
    except ValidationError as e:
        print(f"❌ Schema validation error: {e}")
    except Exception as e:
        print(f"❌ Streaming error: {e}")
Streaming Interface
The stream_json() method yields dictionaries with different types of content during streaming:
Chunk Fields
- partial_object: The current best parsed object. Available from the beginning of streaming in all modes:
  - Early stage: Returns partial dictionaries for incomplete JSON
  - Later stage: Returns validated Pydantic model instances for complete/repairable JSON
- delta: Raw text characters as they are generated by the LLM.
- final_object: The complete, validated Pydantic object when streaming finishes.
- partial_json: The current accumulated JSON text string.
- final_json: The complete JSON text string when streaming finishes.
Recommended Usage Pattern
async for chunk in provider.stream_json(prompt, UserProfile):
    # Use partial_object for real-time updates (recommended)
    if "partial_object" in chunk:
        user_profile = chunk["partial_object"]
        # Available from the beginning - starts as dict, becomes Pydantic object
        # Handle both types gracefully for consistent UI updates
        if hasattr(user_profile, 'model_dump'):
            # Pydantic model (complete/repairable JSON)
            name = user_profile.name or "..."
        else:
            # Dictionary (incomplete JSON)
            name = user_profile.get('name', "...")
        update_ui(name)  # Update UI with current best data

    # Use final_object for the final result
    if "final_object" in chunk:
        final_profile = chunk["final_object"]
        # Process the complete validated object
        save_result(final_profile)
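If you need the raw text instead of parsed objects, the delta and final_json fields documented above can be consumed in the same loop. A minimal sketch (exact chunk contents may vary by provider):

# Sketch: consume the raw-text fields instead of parsed objects.
raw_text = ""
async for chunk in provider.stream_json(prompt, UserProfile):
    if "delta" in chunk:
        raw_text += chunk["delta"]      # raw characters as the model emits them
    if "final_json" in chunk:
        raw_text = chunk["final_json"]  # complete JSON text at the end
        break
print(raw_text)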
Supported Providers & Models
| Provider | Default Model | Method Used |
|---|---|---|
| OpenAI | gpt-4o-2024-08-06 | response_format (Structured Outputs) via beta.chat.completions |
| Anthropic | claude-3-5-sonnet-20240620 (auto-switches to Structured Outputs for claude-sonnet-4.5* / claude-opus-4.1*) | Prefill JSON streaming for legacy models, Structured Outputs (output_format + beta header) for Sonnet 4.5 / Opus 4.1 |
| Google Gemini | gemini-2.5-flash | response_mime_type="application/json" with structured outputs via Google GenAI SDK |
Anthropic Mode Configuration
You can configure which strategy Anthropic models use through multiple methods:
Method 1: Constructor Mode (Recommended)
from llm_json_streaming import create_provider
# Force structured outputs mode
provider = create_provider("anthropic", mode="structured")
# Force prefill mode
provider = create_provider("anthropic", mode="prefill")
# Auto-detection based on model (default)
provider = create_provider("anthropic", mode="auto")
Method 2: Method Parameter Override
# Temporary override per request
async for chunk in provider.stream_json(prompt, UserProfile,
                                        model="claude-3-5-sonnet-20240620",
                                        use_structured_outputs=True):
    # Uses structured outputs regardless of auto-detection
    ...
Mode Priority
- Constructor mode (mode= parameter) - Highest priority
- Method parameter (use_structured_outputs=) - Medium priority
- Auto-detection based on model capabilities - Lowest priority
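For example, under the priority order above, a mode set in the constructor should take precedence over the per-request flag. A sketch of the described precedence, not verified against the implementation:

# Constructor mode has the highest priority, so this provider should keep using
# prefill even though the per-request flag asks for structured outputs
# (assumption: behavior follows the priority order listed above).
provider = create_provider("anthropic", mode="prefill")
async for chunk in provider.stream_json(prompt, UserProfile, use_structured_outputs=True):
    ...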
Anthropic Structured Outputs
Claude Sonnet 4.5 and Claude Opus 4.1 support Anthropic's structured output beta. When using structured mode, chunks include partial JSON text and final Pydantic objects automatically.
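A short sketch of opting into structured mode explicitly; the Sonnet 4.5 model string below is illustrative, so substitute the exact model id your Anthropic account exposes:

provider = create_provider("anthropic", mode="structured")

# "claude-sonnet-4-5" is a placeholder model id, not confirmed by this README.
async for chunk in provider.stream_json(prompt, UserProfile, model="claude-sonnet-4-5"):
    if "partial_json" in chunk:
        print(chunk["partial_json"])    # accumulated JSON text
    if "final_object" in chunk:
        print(chunk["final_object"])    # validated Pydantic object
        break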
Anthropic Prefill Mode
All other Claude models receive schema-derived instructions and an assistant prefill (e.g., { or {"field":) so they skip generic preambles and stream JSON directly—no tool definitions or tool-use deltas required.
Enhanced with multi-level partial object support:
- Real-time partial objects: Available from the first token, even with incomplete JSON
- Progressive improvement: Starts with partial dictionaries, upgrades to Pydantic objects when JSON becomes complete
- JSON repair: Automatically fixes incomplete JSON to enable better partial parsing
- Consistent interface: Behaves like structured outputs while maintaining backward compatibility
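To make the "JSON repair" step concrete, here is a conceptual sketch that closes any open strings and brackets in a truncated JSON prefix so it can be parsed early. It illustrates the idea only and is not the library's actual implementation:

# Conceptual sketch of JSON repair for a truncated prefix; the library's
# internal repair logic is not shown in this README and may differ.
import json

def naive_repair(partial: str):
    """Close unterminated strings/brackets so a truncated JSON prefix parses."""
    closers = []
    in_string = False
    escaped = False
    for ch in partial:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            closers.append("}" if ch == "{" else "]")
        elif ch in "}]" and closers:
            closers.pop()
    candidate = partial + ('"' if in_string else "") + "".join(reversed(closers))
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None  # prefix ends mid-key or mid-value; wait for more tokens

print(naive_repair('{"name": "Ada", "skills": ["Py'))
# -> {'name': 'Ada', 'skills': ['Py']}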
Google Gemini Support
Google Gemini models use the Google GenAI SDK with native structured outputs:
from llm_json_streaming import create_provider

provider = create_provider("google")

async for chunk in provider.stream_json(prompt, UserProfile, model="gemini-2.5-flash"):
    # Handle streaming chunks
    if "partial_object" in chunk:
        print(chunk["partial_object"])
Key Features:
- Native Structured Outputs: Uses response_mime_type="application/json" for guaranteed JSON responses
- JSON Repair: Automatic repair of incomplete JSON for enhanced partial object support
- Schema Validation: Direct Pydantic schema integration for type-safe responses
- Streaming: Real-time partial objects with progressive enhancement
Configuration:
- Set GEMINI_API_KEY environment variable (required)
- Optionally set GOOGLE_BASE_URL for custom endpoints
- Default model: gemini-2.5-flash
Testing
Running Tests
To run the tests with uv:
# Run all tests
uv run pytest
# Run specific test file
uv run pytest tests/test_providers.py
# Run with coverage
uv run pytest --cov=llm_json_streaming
Quick Validation
Test the package installation and basic functionality:
# Using the test package
git clone https://github.com/daniel-style/llm-json-streaming.git
cd llm-json-streaming/test_package
# Test with uv
cd llm-test-project
uv add llm-json-streaming==0.1.0
uv run python basics_test.py
Troubleshooting
🔧 Common Issues
Installation Issues
Problem: ModuleNotFoundError: No module named 'llm_json_streaming'
# Solution: Install the package
pip install llm-json-streaming
# or
uv add llm-json-streaming
Problem: Dependency conflicts
# Solution: Use virtual environment
python -m venv myenv
source myenv/bin/activate # Windows: myenv\Scripts\activate
pip install llm-json-streaming
API Key Issues
Problem: Authentication errors
# Solution: Set environment variables
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
export GEMINI_API_KEY="your-key"
# Or create .env file
echo "OPENAI_API_KEY=your-key" > .env
Streaming Issues
Problem: No final_object received
- Cause: Provider might have returned incomplete JSON
- Solution: Check partial_object for partial results and improve prompt clarity
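One way to apply that advice is to keep the most recent partial_object as a best-effort fallback. A sketch (how usable the partial is depends on how much JSON the provider returned):

# Sketch: fall back to the last partial_object if no final_object arrives.
best_effort = None
final = None
async for chunk in provider.stream_json(prompt, UserProfile):
    if "partial_object" in chunk:
        best_effort = chunk["partial_object"]  # may be a dict or a Pydantic model
    if "final_object" in chunk:
        final = chunk["final_object"]
        break
result = final if final is not None else best_effort
if final is None:
    print(f"⚠️ No final_object received; using last partial result: {result}")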
Problem: Mixed object types (dict vs Pydantic)
# Solution: Handle both types safely
if "partial_object" in chunk:
    obj = chunk["partial_object"]
    if hasattr(obj, 'field_name'):  # Pydantic object
        value = obj.field_name
    else:  # Dict object
        value = obj.get('field_name')
Provider-Specific Issues
OpenAI:
- Model: gpt-4o-2024-08-06 (default)
- Rate limits: Check OpenAI API quotas
- Service issues: Check OpenAI Status
Anthropic:
- Model: claude-3-5-sonnet-20240620 (default)
- Structured outputs: Available for Claude Sonnet 4.5+ and Opus 4.1+
- Mode selection: auto, structured, prefill
Google Gemini:
- Model: gemini-2.5-flash (default)
- API key: Required, no free tier
- Regional availability: Check Google AI Studio
🚨 Error Codes Reference
| Error Code | Description | Solution |
|---|---|---|
| 401 | Invalid API key | Check environment variables |
| 429 | Rate limit exceeded | Wait and retry, or upgrade plan |
| 503 | Service unavailable | Try again later or switch provider |
| ValueError | Invalid provider name | Use: "openai", "anthropic", "claude", "google" |
📞 Getting Help
- Check the test results for known issues
- Review usage examples in the test package
- Open an issue on GitHub with:
- Python version
- Package version
- Error message
- Minimal reproduction code
- Check provider documentation:
Contributing
Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.
Development Setup
# Clone and set up development environment
git clone https://github.com/daniel-style/llm-json-streaming.git
cd llm-json-streaming
# Using uv (recommended)
uv sync
# Or using pip
pip install -e ".[dev]"
Running Tests
# All tests
uv run pytest
# With coverage
uv run pytest --cov=llm_json_streaming
# Specific provider tests
uv run pytest tests/test_openai_integration.py
License
📚 Additional Resources
- PyPI Package: https://pypi.tw.martin98.com/project/llm-json-streaming/
- Test PyPI: https://test.pypi.org/project/llm-json-streaming/
- GitHub Repository: https://github.com/daniel-style/llm-json-streaming
- Issue Tracker: https://github.com/daniel-style/llm-json-streaming/issues
- Documentation: See inline code documentation and examples