A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a website to help you get started with vLLM: visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has grown into one of the most active open-source AI projects, built and maintained by a community of more than 2,000 contributors spanning dozens of academic institutions and companies.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests, chunked prefill, prefix caching (see the sketch after this list)
  • Fast and flexible model execution with piecewise and full CUDA/HIP graphs
  • Quantization: FP8, MXFP8/MXFP4, NVFP4, INT8, INT4, GPTQ/AWQ, GGUF, compressed-tensors, ModelOpt, TorchAO, and more
  • Optimized attention kernels including FlashAttention, FlashInfer, TRTLLM-GEN, FlashMLA, and Triton
  • Optimized GEMM/MoE kernels for various precisions using CUTLASS, TRTLLM-GEN, and CuTeDSL
  • Speculative decoding including n-gram, suffix, EAGLE, and DFlash
  • Automatic kernel generation and graph-level transformations using torch.compile
  • Disaggregated prefill, decode, and encode
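
Continuous batching and prefix caching can be exercised directly from the offline API. Below is a minimal sketch, assuming a small instruct model; the model name is only an example, and the enable_prefix_caching flag may vary across vLLM versions:

    # Minimal offline batched-inference sketch (model name is illustrative).
    from vllm import LLM, SamplingParams

    # Requests sharing a long common prefix benefit from prefix caching; the
    # scheduler batches them continuously instead of padding a fixed batch.
    system = "You are a concise assistant. "
    prompts = [system + q for q in [
        "What is PagedAttention?",
        "Why batch incoming requests continuously?",
    ]]

    llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", enable_prefix_caching=True)
    params = SamplingParams(temperature=0.8, max_tokens=64)

    for out in llm.generate(prompts, params):
        print(out.outputs[0].text)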

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data, expert, and context parallelism for distributed inference
  • Streaming outputs
  • Generation of structured outputs using xgrammar or guidance
  • Tool calling and reasoning parsers
  • OpenAI-compatible API server, plus Anthropic Messages API and gRPC support (see the client sketch after this list)
  • Efficient multi-LoRA support for dense and MoE layers
  • Support for NVIDIA GPUs, AMD GPUs, and x86/ARM/PowerPC CPUs, plus hardware plugins for Google TPUs, Intel Gaudi, IBM Spyre, Huawei Ascend, Rebellions NPU, Apple Silicon, MetaX GPU, and more
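
The OpenAI-compatible server works with the official openai client. The sketch below assumes a server was started with `vllm serve Qwen/Qwen2.5-0.5B-Instruct` on the default port 8000 (the model name is an example) and demonstrates streaming outputs:

    # Talk to a running vLLM server through the OpenAI client.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    # stream=True yields tokens as they are generated.
    stream = client.chat.completions.create(
        model="Qwen/Qwen2.5-0.5B-Instruct",
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)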

vLLM seamlessly supports 200+ model architectures on Hugging Face, including:

  • Decoder-only LLMs (e.g., Llama, Qwen, Gemma)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V3, Qwen-MoE, GPT-OSS)
  • Hybrid attention and state-space models (e.g., Mamba, Qwen3.5)
  • Multi-modal models (e.g., LLaVA, Qwen-VL, Pixtral)
  • Embedding and retrieval models (e.g., E5-Mistral, GTE, ColBERT); see the pooling sketch below
  • Reward and classification models (e.g., Qwen-Math)

Find the full list of supported models here.
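
For pooling models such as the embedding family above, usage looks roughly like the following sketch; the task argument and the LLM.embed method are assumptions that may differ between vLLM versions, and the model name is only an example:

    # Hypothetical embedding sketch; API details vary by vLLM version.
    from vllm import LLM

    llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")
    outputs = llm.embed([
        "a query about memory management",
        "a passage describing PagedAttention",
    ])
    for out in outputs:
        print(len(out.outputs.embedding))  # embedding dimensionality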

Getting Started

Install vLLM with uv (recommended) or pip:

uv pip install vllm

Or build from source for development.
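
As a quick post-install sanity check, the following sketch loads a small model and prints one completion with default sampling parameters (the model name is illustrative; any small Hugging Face causal LM should work):

    # Post-install smoke test: one prompt, default sampling parameters.
    from vllm import LLM

    llm = LLM(model="facebook/opt-125m")
    print(llm.generate(["Hello, my name is"])[0].outputs[0].text)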

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussions with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm-0.20.2.tar.gz (33.5 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm-0.20.2-cp38-abi3-manylinux_2_35_x86_64.whl (244.4 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.35+, x86-64

vllm-0.20.2-cp38-abi3-manylinux_2_35_aarch64.whl (235.8 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.35+, ARM64

File details

Details for the file vllm-0.20.2.tar.gz.

File metadata

  • Download URL: vllm-0.20.2.tar.gz
  • Upload date:
  • Size: 33.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for vllm-0.20.2.tar.gz
  • SHA256: 58809377798c5335c6e2fe30092abda54d9200b5b8a717b3735a63f5daa0e383
  • MD5: 28b7ae0466b5a8f403c3d32661989efe
  • BLAKE2b-256: cd234a5dc23600be07407835e404d06a2fcd587c8dcaebeab314b5c3593509ce

See more details on using hashes here.

File details

Details for the file vllm-0.20.2-cp38-abi3-manylinux_2_35_x86_64.whl.

File hashes

Hashes for vllm-0.20.2-cp38-abi3-manylinux_2_35_x86_64.whl
  • SHA256: 22a7dd06eb03371298e13d6100f3dedbf307352342aaf08e87c929c60aae9b4d
  • MD5: 34765d855502c6545a79ab5f247eaff1
  • BLAKE2b-256: c5aa4488d49c481a2184e6e285b8d3f937905205f52cd5ac30fb348770494b6e

See more details on using hashes here.

File details

Details for the file vllm-0.20.2-cp38-abi3-manylinux_2_35_aarch64.whl.

File hashes

Hashes for vllm-0.20.2-cp38-abi3-manylinux_2_35_aarch64.whl
  • SHA256: 76ccf4c0554556c06f6b0fb1643742d4cf97dcc69f6ef3f04556d0764126035a
  • MD5: d48f48925056a0b6a5d844e94a87d34f
  • BLAKE2b-256: 19d97bef6cf9d7508b4313237602e96e10054c7491f12f48403813c9c2d6f6f1

See more details on using hashes here.
