Internal Python SDK for injecting AI-native ads
@clad-ai/python
Table of Contents
- Overview
- Installation
- Python Version Support
- Quick Start
- Core Concepts
- Methods
- Framework Integration Examples
- Error Handling
- Support
Overview
Clad provides a lightweight Python SDK for secure, low-latency native ad injection in LLM workflows. Built for server-side applications, this SDK offers flexible processing modes to match your infrastructure needs.
Key Features:
- 🚀 Three processing modes for different performance and infrastructure needs
- 🐍 Python 3.8+ support with async/await
- 🏗️ Framework agnostic - Works with FastAPI, Flask, Django, and more
- 🔧 Production ready - Built-in Redis support, error handling, and fallbacks
⚠️ This SDK is proprietary and intended for authorized Clad Labs clients only.
Installation

```shell
pip install clad-ai-python
```

For Redis support (optional):

```shell
pip install redis
```
Python Version Support
- Python 3.8+: Full support with async/await
- Redis: Optional dependency for production scaling
Quick Start
```python
from clad_sdk import CladClient

# Initialize the client
clad = CladClient(
    api_key="YOUR_API_KEY",
    threshold=3  # Optional: messages before triggering the API (default: 3)
)

# Process user input
response = await clad.get_processed_input(
    user_input="I'm looking for running shoes",
    user_id="user-123",
    discrete="false"
)

print(response["prompt"])  # Final prompt, with or without an ad
```
Configuration Parameters:
- `api_key: str` — API key provided by Clad. Contact support@clad.ai to get yours.
- `threshold: int` — Optional. Number of messages before triggering an API call. Defaults to 3.
- `redis_client` — Optional. Redis client for the `get_processed_input_with_redis` method.
Core Concepts
✅ Three modes of operation:
Each mode offers a different balance of speed, memory footprint, and infrastructure requirements:
- `get_processed_input` — Fast; uses local server RAM for counting and context
- `get_processed_input_fully_managed` — Zero local memory; fully server-managed state
- `get_processed_input_with_redis` — Production-ready; uses Redis to scale across servers
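As a rough decision aid, the trade-off between the three modes can be sketched as a small helper. This is not part of the SDK; the method names come from the list above, and the selection logic is only an assumption about typical deployments:

```python
# Hypothetical helper, NOT part of the Clad SDK: picks the processing
# method name based on the deployment shape described above.
def pick_mode(multi_server: bool, has_redis: bool) -> str:
    if has_redis:
        # Centralized state in Redis scales across servers
        return "get_processed_input_with_redis"
    if multi_server:
        # No shared memory available: let Clad's backend hold the state
        return "get_processed_input_fully_managed"
    # Single server: the fast in-process TTL cache is enough
    return "get_processed_input"

print(pick_mode(multi_server=False, has_redis=False))  # get_processed_input
```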
Methods
Important: All methods require a user_id parameter. In backend environments, this user ID should be passed from your frontend application (which can use the @clad-ai/react SDK's getOrCreateUserId() function). Do not generate user IDs on the backend as they will not persist across requests.
get_processed_input
Mode 1: Local Memory (Fast & Lightweight)
Uses in-process TTL cache for ultra-fast message counting and context tracking. Ideal for single-server deployments.
```python
clad = CladClient(api_key="YOUR_API_KEY")

response = await clad.get_processed_input(
    user_input="I'm looking for shoes",
    user_id="user-123",
    discrete="false"
)
```
Parameters:
- `user_input: str` — User's chat input
- `user_id: str` — Persistent user ID (from the frontend)
- `discrete: str` — "true" or "false"; controls explicit ad marking
Returns:
```python
{
    "prompt": str,
    "promptType": "clean" | "injected",
    "link": str,
    "discrete": "true" | "false",
    "adType": str,
    "image_url": Optional[str]
}
```
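For illustration, a small helper can branch on the returned fields. The helper and the sample dict below are hypothetical; only the dict's shape follows the schema above:

```python
def describe_response(response: dict) -> str:
    """Summarize a response dict of the shape documented above."""
    if response["promptType"] == "injected":
        return f"ad ({response['adType']}) -> {response['link']}"
    return "clean prompt"

# Hypothetical sample response, for illustration only
sample = {
    "prompt": "I'm looking for shoes [sponsored context]",
    "promptType": "injected",
    "link": "https://example.com/shoes",
    "discrete": "false",
    "adType": "inline",
    "image_url": None,
}
print(describe_response(sample))  # ad (inline) -> https://example.com/shoes
```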
get_processed_input_fully_managed
Mode 2: Zero Memory (Fully Server-Managed)
No local memory usage. All counting and injection logic handled by Clad's backend. Adds slight network latency but requires zero local state.
```python
response = await clad.get_processed_input_fully_managed(
    user_input="Looking for cafes",
    user_id="user-123",
    discrete="false",
    threshold=5  # Optional: override the threshold for this request
)
```
Parameters:
Same as `get_processed_input`, plus:
- `threshold: int` — Optional threshold override for this request
get_processed_input_with_redis
Mode 3: Production-Ready (Redis-Enhanced)
🏆 Recommended for production environments
Uses Redis for persistent, scalable state management. Perfect for multi-server deployments with centralized state.
```python
import redis.asyncio as aioredis

# Set up Redis
r = aioredis.from_url("redis://localhost:6379/0")
clad = CladClient(api_key="YOUR_API_KEY", redis_client=r)

# Use Redis-enhanced processing
response = await clad.get_processed_input_with_redis(
    user_input="Book a hotel",
    user_id="user-123",
    discrete="false"
)
```
Parameters:
Same as get_processed_input.
Framework Integration Examples
FastAPI
```python
from fastapi import FastAPI
from clad_sdk import CladClient

app = FastAPI()
clad = CladClient(api_key="YOUR_API_KEY")

@app.post("/api/chat")
async def chat(request: dict):
    message = request["message"]
    user_id = request["userId"]

    result = await clad.get_processed_input(message, user_id)

    # Send to your LLM
    llm_response = await your_llm.generate(result["prompt"])

    return {
        "response": llm_response,
        "hasAd": result["promptType"] == "injected"
    }
```
Flask (with async support)
```python
from flask import Flask, request, jsonify
from clad_sdk import CladClient
import asyncio

app = Flask(__name__)
clad = CladClient(api_key="YOUR_API_KEY")

@app.route("/api/chat", methods=["POST"])
def chat():
    data = request.json

    # Run the async SDK call from a sync context
    result = asyncio.run(clad.get_processed_input_fully_managed(
        data["message"],
        data["userId"]
    ))

    return jsonify({"processedPrompt": result["prompt"]})
```
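Note that `asyncio.run()` creates and tears down a fresh event loop on every request. If that overhead matters under load, one common alternative is a single long-lived background loop. This is a generic asyncio pattern, not something the SDK provides:

```python
import asyncio
import threading

# One shared event loop, running on a daemon thread for the process lifetime
_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

def run_async(coro):
    """Run a coroutine on the shared loop from sync Flask view code."""
    return asyncio.run_coroutine_threadsafe(coro, _loop).result()

# Usage inside a Flask view (clad/message/user_id as in the example above):
# result = run_async(clad.get_processed_input_fully_managed(message, user_id))
```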
Django (async views)
```python
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from clad_sdk import CladClient
import json

clad = CladClient(api_key="YOUR_API_KEY")

@csrf_exempt
async def chat_view(request):
    if request.method == "POST":
        data = json.loads(request.body)

        result = await clad.get_processed_input(
            data["message"],
            data["userId"]
        )

        return JsonResponse({
            "prompt": result["prompt"],
            "hasAd": result["promptType"] == "injected"
        })
```
Error Handling
The SDK provides comprehensive error handling with graceful fallbacks:
```python
# All methods return clean prompts on error
response = await clad.get_processed_input_fully_managed(message, user_id)

# Check for errors
if "_error" in response:
    print(f"API Error: {response['_error']['message']}")
    # The response still contains a fallback clean prompt

# Redis method error handling
try:
    response = await clad.get_processed_input_with_redis(message, user_id)
except RuntimeError as e:
    if "Redis client not configured" in str(e):
        # Handle the Redis configuration error
        pass
```
Error Response Format:
```python
{
    "prompt": str,          # Always present (falls back to the original input)
    "promptType": "clean",  # Always "clean" on error
    "link": "",             # Empty on error
    "discrete": "false",    # Default value
    "_error": {             # Present when an error occurs
        "status": int,      # HTTP status code (if applicable)
        "message": str      # Error description
    }
}
```
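Since every response carries a usable prompt plus an optional `_error` key, a thin unwrapping helper can keep call sites tidy. This is a sketch against the schema above, not part of the SDK API:

```python
from typing import Optional, Tuple

def unwrap(response: dict) -> Tuple[str, Optional[str]]:
    """Return (prompt, error_message); error_message is None on success."""
    error = response.get("_error")
    return response["prompt"], (error["message"] if error else None)

# Hypothetical error response, for illustration only
failed = {
    "prompt": "original user input",
    "promptType": "clean",
    "link": "",
    "discrete": "false",
    "_error": {"status": 503, "message": "upstream timeout"},
}
prompt, err = unwrap(failed)
print(err)  # upstream timeout
```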
Support
For help, email us at support@clad.ai
This software is proprietary and confidential. Unauthorized copying, distribution, or use is strictly prohibited without express written permission from Clad Labs.
© 2025 Clad Labs. All rights reserved.