
vllm-deploy-docker

Deploy vLLM using Docker (pre-built images or build-from-source) with NVIDIA GPU support and run the OpenAI-compatible server.


Installation

$ npx skills add https://github.com/vllm-project/vllm-skills --skill vllm-deploy-docker

How to Install vllm-deploy-docker

Quickly install the vllm-deploy-docker AI skill in your development environment via the command line.

  1. Open Terminal: Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.).
  2. Run Installation Command: Copy and run this command: npx skills add https://github.com/vllm-project/vllm-skills --skill vllm-deploy-docker
  3. Verify Installation: Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: vllm-project/vllm-skills.

SKILL.md


A Claude skill describing how to deploy vLLM with Docker, either using the official pre-built images or building the image from source, with NVIDIA GPU (CUDA) support. The instructions cover NVIDIA CUDA setup, an example docker run invocation, a minimal docker-compose snippet, recommended flags, and troubleshooting notes. For AMD, Intel, or other accelerators, refer to the vLLM documentation for alternative deployment methods.
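The build-from-source path mentioned above can be sketched as follows. This is a sketch, not the skill's exact recipe: the `vllm-openai` build target and the `:custom` tag are assumptions based on the Dockerfile shipped in the vLLM repository, so check the current repo for the authoritative target name.

```shell
# Clone the vLLM repository and build the OpenAI server image from source.
git clone https://github.com/vllm-project/vllm.git
cd vllm

# BuildKit is required; --target selects the OpenAI-server stage of the Dockerfile.
DOCKER_BUILDKIT=1 docker build . \
  --target vllm-openai \
  --tag vllm/vllm-openai:custom
```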

Run a vLLM OpenAI-compatible server with GPU access, mounting the HF cache and forwarding port 8000:
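A representative invocation is sketched below; the model name and the `HF_TOKEN` variable are placeholders, and flags should be adjusted to your hardware. `--ipc=host` is recommended because vLLM workers communicate through shared memory.

```shell
# Run the OpenAI-compatible server on port 8000 with all NVIDIA GPUs,
# reusing the host's Hugging Face cache so models are not re-downloaded.
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-7B-v0.1
```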

Note: vLLM and this skill recommend the latest Docker image (vllm/vllm-openai:latest). For older releases, see the image tags on Docker Hub.
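The SKILL.md also mentions a minimal docker-compose snippet. A sketch along those lines, assuming Compose's device-reservation syntax for NVIDIA GPUs (the service and model names are illustrative):

```yaml
services:
  vllm:
    image: vllm/vllm-openai:latest
    ipc: host
    ports:
      - "8000:8000"
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
    environment:
      - HUGGING_FACE_HUB_TOKEN=${HF_TOKEN}
    command: ["--model", "mistralai/Mistral-7B-v0.1"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```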


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/vllm-project/vllm-skills --skill vllm-deploy-docker
Category
Dev Tools
Verified
First Seen
2026-03-12
Updated
2026-03-13

Browse more skills from vllm-project/vllm-skills

Quick answers

What is vllm-deploy-docker?

Deploy vLLM using Docker (pre-built images or build-from-source) with NVIDIA GPU support and run the OpenAI-compatible server. Source: vllm-project/vllm-skills.
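Once the server is running, any OpenAI-style client can talk to it. A minimal sketch of a chat-completions request, assuming the default port 8000; the model name, endpoint URL, and helper names are illustrative, not part of the skill:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> bytes:
    """Serialize an OpenAI-style /v1/chat/completions request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload).encode("utf-8")

def send_chat_request(
    body: bytes,
    url: str = "http://localhost:8000/v1/chat/completions",
) -> dict:
    """POST the request body to a locally running vLLM server."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the request body (sending it requires a running server).
body = build_chat_request("mistralai/Mistral-7B-v0.1", "Hello!")
```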

How do I install vllm-deploy-docker?

Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/vllm-project/vllm-skills --skill vllm-deploy-docker. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/vllm-project/vllm-skills

Details

Category
Dev Tools
Source
user
First Seen
2026-03-12