
Llama Stack


Quick Start | Documentation | Colab Notebook | Discord

✨🎉 Llama 4 Support 🎉✨

We released Version 0.2.0 with support for the Llama 4 herd of models released by Meta.

👋 Here is how to run Llama 4 models on Llama Stack:


Note: you need an 8xH100 GPU host to run these models.

pip install -U llama_stack

MODEL="Llama-4-Scout-17B-16E-Instruct"
# get meta url from llama.com
llama model download --source meta --model-id $MODEL --meta-url <META_URL>

# start a llama stack server
INFERENCE_MODEL=meta-llama/$MODEL llama stack build --run --template meta-reference-gpu

# install client to interact with the server
pip install llama-stack-client
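
Once the server is running, a quick way to sanity-check it is to list the models it serves. Here is a minimal sketch using the client SDK installed above, assuming the server is listening on its default port 8321 as in the examples below:

from llama_stack_client import LlamaStackClient

# Connect to the locally running Llama Stack server.
client = LlamaStackClient(base_url="http://localhost:8321")

# List every model the server serves; the Llama 4 model
# downloaded above should appear among the identifiers.
for model in client.models.list():
    print(model.identifier)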

CLI

# Run a chat completion
MODEL="Llama-4-Scout-17B-16E-Instruct"

llama-stack-client --endpoint http://localhost:8321 \
inference chat-completion \
--model-id meta-llama/$MODEL \
--message "write a haiku for meta's llama 4 models"

ChatCompletionResponse(
    completion_message=CompletionMessage(content="Whispers in code born\nLlama's gentle, wise heartbeat\nFuture's soft unfold", role='assistant', stop_reason='end_of_turn', tool_calls=[]),
    logprobs=None,
    metrics=[Metric(metric='prompt_tokens', value=21.0, unit=None), Metric(metric='completion_tokens', value=28.0, unit=None), Metric(metric='total_tokens', value=49.0, unit=None)]
)

Python SDK

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
prompt = "Write a haiku about coding"

print(f"User> {prompt}")
response = client.inference.chat_completion(
    model_id=model_id,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ],
)
print(f"Assistant> {response.completion_message.content}")

As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!

🚀 One-Line Installer 🚀

To try Llama Stack locally, run:

curl -LsSf https://github.com/meta-llama/llama-stack/raw/main/scripts/install.sh | bash

Overview

Llama Stack standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides:

  • Unified API layer for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry (see the sketch after this list).
  • Plugin architecture to support the rich ecosystem of different API implementations in various environments, including local development, on-premises, cloud, and mobile.
  • Prepackaged verified distributions which offer a one-stop solution for developers to get started quickly and reliably in any environment.
  • Multiple developer interfaces like CLI and SDKs for Python, Typescript, iOS, and Android.
  • Standalone applications as examples for how to build production-grade AI applications with Llama Stack.
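
To make the unified API layer concrete, here is a minimal sketch of one client object fronting two different APIs. It assumes a running local server; the shield_id value is a placeholder for whatever safety shield your distribution registers, not a guaranteed default:

from llama_stack_client import LlamaStackClient

# One client object fronts every API the stack exposes.
client = LlamaStackClient(base_url="http://localhost:8321")

# Inference API: generate a chat completion.
reply = client.inference.chat_completion(
    model_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.completion_message.content)

# Safety API: run a shield over the same message format.
# "llama_guard" is a placeholder shield id.
verdict = client.safety.run_shield(
    shield_id="llama_guard",
    messages=[{"role": "user", "content": "Hello!"}],
    params={},
)
print(verdict.violation)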

Llama Stack Benefits

  • Flexible Options: Developers can choose their preferred infrastructure without changing APIs and enjoy flexible deployment choices (see the sketch below).
  • Consistent Experience: With its unified APIs, Llama Stack makes it easier to build, test, and deploy AI applications with consistent application behavior.
  • Robust Ecosystem: Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies) that offer tailored infrastructure, software, and services for deploying Llama models.

By reducing friction and complexity, Llama Stack empowers developers to focus on what they do best: building transformative generative AI applications.
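
As a sketch of the Flexible Options point above, the same application code can target a local or a hosted distribution purely through configuration; the environment variable names and defaults below are illustrative, not Llama Stack conventions:

import os

from llama_stack_client import LlamaStackClient

# Endpoint and model come from configuration, not code.
base_url = os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:8321")
model_id = os.environ.get("LLAMA_STACK_MODEL", "meta-llama/Llama-4-Scout-17B-16E-Instruct")

client = LlamaStackClient(base_url=base_url)
response = client.inference.chat_completion(
    model_id=model_id,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.completion_message.content)

Pointing LLAMA_STACK_ENDPOINT at a hosted distribution (e.g., a Fireworks-backed deployment) requires no change to the code above.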

API Providers

Here is a list of the various API providers and available distributions that can help developers get started easily with Llama Stack. Please check out our documentation for the full list.

| API Provider Builder | Environments | Agents | Inference | VectorIO | Safety | Telemetry | Post Training | Eval | DatasetIO |
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Meta Reference | Single Node | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| SambaNova | Hosted | | ✅ | | ✅ | | | | |
| Cerebras | Hosted | | ✅ | | | | | | |
| Fireworks | Hosted | ✅ | ✅ | ✅ | | | | | |
| AWS Bedrock | Hosted | | ✅ | | ✅ | | | | |
| Together | Hosted | ✅ | ✅ | | ✅ | | | | |
| Groq | Hosted | | ✅ | | | | | | |
| Ollama | Single Node | | ✅ | | | | | | |
| TGI | Hosted/Single Node | | ✅ | | | | | | |
| NVIDIA NIM | Hosted/Single Node | | ✅ | | ✅ | | | | |
| ChromaDB | Hosted/Single Node | | | ✅ | | | | | |
| Milvus | Hosted/Single Node | | | ✅ | | | | | |
| Qdrant | Hosted/Single Node | | | ✅ | | | | | |
| Weaviate | Hosted/Single Node | | | ✅ | | | | | |
| SQLite-vec | Single Node | | | ✅ | | | | | |
| PG Vector | Single Node | | | ✅ | | | | | |
| PyTorch ExecuTorch | On-device iOS | ✅ | ✅ | | | | | | |
| vLLM | Single Node | | ✅ | | | | | | |
| OpenAI | Hosted | | ✅ | | | | | | |
| Anthropic | Hosted | | ✅ | | | | | | |
| Gemini | Hosted | | ✅ | | | | | | |
| WatsonX | Hosted | | ✅ | | | | | | |
| HuggingFace | Single Node | | | | | | ✅ | | ✅ |
| TorchTune | Single Node | | | | | | ✅ | | |
| NVIDIA NEMO | Hosted | | ✅ | ✅ | | | ✅ | ✅ | ✅ |
| NVIDIA | Hosted | | | | | | ✅ | ✅ | ✅ |

Note: Additional providers are available through external packages. See External Providers documentation.

Distributions

A Llama Stack Distribution (or "distro") is a pre-configured bundle of provider implementations for each API component. Distributions make it easy to get started with a specific deployment scenario: you can begin with a local development setup (e.g., Ollama) and seamlessly transition to production (e.g., Fireworks) without changing your application code. Here are some of the distributions we support:

| Distribution | Llama Stack Docker | Start This Distribution |
|:---|:---|:---|
| Starter Distribution | llamastack/distribution-starter | Guide |
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide |
| PostgreSQL | llamastack/distribution-postgres-demo | |

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDKs

| Language | Client SDK | Package |
|:---|:---|:---|
| Python | llama-stack-client-python | PyPI |
| Swift | llama-stack-client-swift | Swift Package Index |
| Typescript | llama-stack-client-typescript | NPM |
| Kotlin | llama-stack-client-kotlin | Maven |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language; you can choose from Python, TypeScript, Swift, and Kotlin to quickly build your applications.

You can find more example scripts with client SDKs to talk with the Llama Stack server in our llama-stack-apps repo.

🌟 GitHub Star History


✨ Contributors

Thanks to all of our amazing contributors!
