
Pinned

  1. vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 54.2k stars · 9.2k forks

  2. llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1.7k stars · 191 forks

  3. recipes Public

    Common recipes to run vLLM

    82 stars · 12 forks
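
The pinned vllm engine above can be driven directly from Python for offline batch inference. The snippet below is a minimal sketch following vLLM's documented LLM / SamplingParams API; the model name is only an illustrative placeholder, and any supported Hugging Face-compatible causal language model can be substituted.

    # Minimal offline-inference sketch with vLLM (pip install vllm).
    # The model below is only an example; swap in any supported checkpoint.
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    llm = LLM(model="facebook/opt-125m")  # builds the engine and loads the weights
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, output.outputs[0].text)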

Repositories

Showing 10 of 20 repositories
  • guidellm Public

    Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs

    Python · 467 stars · Apache-2.0 · 64 forks · 54 open issues (4 need help) · 18 open PRs · Updated Aug 6, 2025
  • flash-attention Public Forked from Dao-AILab/flash-attention

    Fast and memory-efficient exact attention

    Python · 85 stars · BSD-3-Clause · 1,881 forks · 0 open issues · 12 open PRs · Updated Aug 6, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM

    Python · 1,748 stars · Apache-2.0 · 191 forks · 36 open issues (7 need help) · 30 open PRs · Updated Aug 6, 2025
  • vllm-spyre Public

    Community maintained hardware plugin for vLLM on Spyre

    Python · 30 stars · Apache-2.0 · 18 forks · 11 open issues · 10 open PRs · Updated Aug 6, 2025
  • vllm-gaudi Public

    Community maintained hardware plugin for vLLM on Intel Gaudi

    Python · 8 stars · 11 forks · 0 open issues · 11 open PRs · Updated Aug 6, 2025
  • vllm-ascend Public

    Community maintained hardware plugin for vLLM on Ascend

    Python · 959 stars · Apache-2.0 · 304 forks · 273 open issues (5 need help) · 166 open PRs · Updated Aug 6, 2025
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python · 54,183 stars · Apache-2.0 · 9,171 forks · 1,837 open issues (15 need help) · 927 open PRs · Updated Aug 6, 2025
  • recipes Public

    Common recipes to run vLLM

    82 stars · Apache-2.0 · 12 forks · 1 open issue · 1 open PR · Updated Aug 6, 2025
  • aibrix Public

    Cost-efficient and pluggable Infrastructure components for GenAI inference

    Go · 3,991 stars · Apache-2.0 · 413 forks · 192 open issues (19 need help) · 19 open PRs · Updated Aug 6, 2025
  • production-stack Public

    vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization

    Python · 1,622 stars · Apache-2.0 · 249 forks · 65 open issues (4 need help) · 41 open PRs · Updated Aug 6, 2025
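
Most of the deployment-oriented repositories above (recipes, production-stack, aibrix) sit in front of a vLLM server that exposes an OpenAI-compatible HTTP API. The snippet below is a rough sketch under the assumption that a server has already been started locally with the vllm serve command on its default port 8000; it queries the completions endpoint with the standard openai Python client, and the model name must match whatever was served.

    # Sketch of querying a locally running vLLM server via its OpenAI-compatible API.
    # Assumes something like:  vllm serve facebook/opt-125m   (default port 8000)
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused by default

    response = client.completions.create(
        model="facebook/opt-125m",          # must match the served model name
        prompt="The capital of France is",
        max_tokens=32,
        temperature=0.0,
    )
    print(response.choices[0].text)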