Library to easily interface with LLM API providers

Project description

🚅 LiteLLM

Call all LLM APIs using the OpenAI format [Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.]

Schedule Demo · Feature Request


Docs · 100+ Supported Models · Demo Video

LiteLLM manages

  • Translating inputs to the provider's completion and embedding endpoints
  • Guaranteeing consistent output - text responses are always available at ['choices'][0]['message']['content']
  • Exception mapping - common exceptions across providers are mapped to the OpenAI exception types (see the sketch below)
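
Exception mapping means a single try/except can cover every provider. A minimal sketch, assuming openai<1.0 (where the mapped exception types live in openai.error); the key here is a deliberately invalid placeholder:

import os
import openai
from litellm import completion

os.environ["COHERE_API_KEY"] = "bad-key"  # placeholder; invalid on purpose to trigger the mapped error
messages = [{"content": "Hello, how are you?", "role": "user"}]

try:
    # a Cohere call - failures surface as OpenAI exception types
    response = completion(model="command-nightly", messages=messages)
except openai.error.AuthenticationError as e:
    print(f"Authentication failed, mapped from the Cohere error: {e}")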

10/05/2023: LiteLLM is adopting Semantic Versioning for all commits. Learn more
10/16/2023: Self-hosted OpenAI-proxy server Learn more

Usage

Open In Colab
pip install litellm
from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-openai-key" 
os.environ["COHERE_API_KEY"] = "your-cohere-key" 

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response. Streaming is supported for OpenAI, Azure, Anthropic, and Huggingface models.

response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for chunk in response:
    print(chunk['choices'][0]['delta'])

# claude 2
result = completion('claude-2', messages, stream=True)
for chunk in result:
    print(chunk['choices'][0]['delta'])

OpenAI Proxy Server (Docs)

Create an OpenAI API-compatible server to call any non-OpenAI model (e.g. Huggingface, TogetherAI, Ollama, etc.)

This works for async + streaming as well.

litellm --model <model_name>

#INFO: litellm proxy running on http://0.0.0.0:8000

Running your model locally or on a custom endpoint? Set the --api-base parameter - see how
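
Once the proxy is running, you can point any OpenAI client at it. A minimal sketch, assuming the proxy is listening on http://0.0.0.0:8000 as shown above and the pre-v1 openai Python package is installed:

import openai

openai.api_base = "http://0.0.0.0:8000"  # route requests through the LiteLLM proxy
openai.api_key = "anything"              # placeholder; provider keys are handled by the proxy

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello through the proxy!"}],
)
print(response["choices"][0]["message"]["content"])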

Self-host server (Docs)

  1. Clone the repo

git clone https://github.com/BerriAI/litellm.git

  2. Modify template_secrets.toml

[keys]
OPENAI_API_KEY="sk-..."

[general]
default_model = "gpt-3.5-turbo"

  3. Deploy

docker build -t litellm . && docker run -p 8000:8000 litellm
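
Once the container is up, a quick smoke test (a sketch; assumes the server exposes the OpenAI-compatible /chat/completions route on port 8000, as the proxy above does):

import requests

resp = requests.post(
    "http://localhost:8000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])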

Supported Providers (Docs)

| Provider | Completion | Streaming | Async Completion | Async Streaming |
| --- | --- | --- | --- | --- |
| openai | ✅ | ✅ | ✅ | ✅ |
| cohere | ✅ | ✅ | ✅ | ✅ |
| anthropic | ✅ | ✅ | ✅ | ✅ |
| replicate | ✅ | ✅ | ✅ | ✅ |
| huggingface | ✅ | ✅ | ✅ | ✅ |
| together_ai | ✅ | ✅ | ✅ | ✅ |
| openrouter | ✅ | ✅ | ✅ | ✅ |
| vertex_ai | ✅ | ✅ | ✅ | ✅ |
| palm | ✅ | ✅ | ✅ | ✅ |
| ai21 | ✅ | ✅ | ✅ | ✅ |
| baseten | ✅ | ✅ | ✅ | ✅ |
| azure | ✅ | ✅ | ✅ | ✅ |
| sagemaker | ✅ | ✅ | ✅ | ✅ |
| bedrock | ✅ | ✅ | ✅ | ✅ |
| vllm | ✅ | ✅ | ✅ | ✅ |
| nlp_cloud | ✅ | ✅ | ✅ | ✅ |
| aleph alpha | ✅ | ✅ | ✅ | ✅ |
| petals | ✅ | ✅ | ✅ | ✅ |
| ollama | ✅ | ✅ | ✅ | ✅ |
| deepinfra | ✅ | ✅ | ✅ | ✅ |

Read the Docs
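
The async columns above map to LiteLLM's acompletion, the async counterpart to completion. A minimal sketch:

import asyncio
import os
from litellm import acompletion

os.environ["OPENAI_API_KEY"] = "your-openai-key"

async def main():
    # non-blocking call; pass stream=True here for async streaming
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    print(response["choices"][0]["message"]["content"])

asyncio.run(main())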

Logging & Observability - Log LLM Input/Output (Docs)

LiteLLM exposes pre-defined callbacks to send data to LLMonitor, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["PROMPTLAYER_API_KEY"] = "your-promptlayer-key"
os.environ["LLMONITOR_APP_ID"] = "your-llmonitor-app-id"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["promptlayer", "llmonitor"] # log input/output to promptlayer and llmonitor

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
pytest .

Step 4: Submit a PR with your changes! 🚀

  • Push your fork to your GitHub repo
  • Submit a PR from there

Support / talk with founders

Why did we build this

  • Need for simplicity: Our code started to get extremely complicated managing & translating calls between Azure, OpenAI and Cohere.

Contributors

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

litellm-0.11.1.tar.gz (1.3 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

litellm-0.11.1-py3-none-any.whl (1.4 MB)

Uploaded Python 3

File details

Details for the file litellm-0.11.1.tar.gz.

File metadata

  • Download URL: litellm-0.11.1.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for litellm-0.11.1.tar.gz
Algorithm Hash digest
SHA256 eeb07db119f0028a925736a842cb4edf74afb9d91d924d1a48aa7fd1fe33fa82
MD5 ab9f9e5f1ab442866df8443383c6a73c
BLAKE2b-256 a9ba521244d772dbb95f1bf882ea47cb0fedcfedd2ef5449a6489a6f064208c6

See more details on using hashes here.

File details

Details for the file litellm-0.11.1-py3-none-any.whl.

File metadata

  • Download URL: litellm-0.11.1-py3-none-any.whl
  • Upload date:
  • Size: 1.4 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.18

File hashes

Hashes for litellm-0.11.1-py3-none-any.whl
Algorithm Hash digest
SHA256 eff68b7e1fdd4451673a6634ca949870fea7e7d3835ce2bc98185a35fcf40690
MD5 fe039417f72d41ed35ab2cebf0390fa0
BLAKE2b-256 c2d6dd0a4a80814889850642db3cf6f1ca5e03611fa666b1a9af02a4d6403a0f

See more details on using hashes here.
