
vllm-deployment

Deploy vLLM for high-performance LLM inference. Covers Docker CPU/GPU deployments and cloud VM provisioning with OpenAI-compatible API endpoints.

4 installs · 0 trend · @stakpak

Installation

$ npx skills add https://github.com/stakpak/community-paks --skill vllm-deployment

How to Install vllm-deployment

Install the vllm-deployment AI skill in your development environment from the command line.

  1. Open a terminal: Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
  2. Run the installation command: Copy and run this command: npx skills add https://github.com/stakpak/community-paks --skill vllm-deployment
  3. Verify the installation: Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: stakpak/community-paks.

SKILL.md


| Hardware | Minimum memory | Recommended memory |
| --- | --- | --- |
| CPU | 2x model size | 4x model size |
| GPU | Model size + 2GB | Model size + 4GB VRAM |

| Variable | Purpose | Example |
| --- | --- | --- |
| VLLM_CPU_KVCACHE_SPACE | KV cache size in GB (CPU) | 4 |
| VLLM_CPU_OMP_THREADS_BIND | CPU core binding (CPU) | 0-7 |
| CUDA_VISIBLE_DEVICES | GPU device selection | 0,1 |
| HF_TOKEN | HuggingFace authentication | hf_xxx |

| Flag | Purpose |
| --- | --- |
| --shm-size=4g | Shared memory for IPC |
| --cap-add SYS_NICE | NUMA optimization (CPU) |
| --security-opt seccomp=unconfined | Memory policy syscalls (CPU) |
| --gpus all | GPU access |
| -p 8000:8000 | Port mapping |
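Putting the flag and environment-variable tables together, here is a hedged sketch of the two docker run invocations. The image tags, model name, core range, and cache size are illustrative assumptions, not values taken from this page; substitute your own.

```shell
# CPU deployment sketch: replace the image with your CPU-built vLLM image
# (the official vllm/vllm-openai image targets GPU)
docker run -d --name vllm-cpu \
  --shm-size=4g \
  --cap-add SYS_NICE \
  --security-opt seccomp=unconfined \
  -e VLLM_CPU_KVCACHE_SPACE=4 \
  -e VLLM_CPU_OMP_THREADS_BIND=0-7 \
  -e HF_TOKEN=hf_xxx \
  -p 8000:8000 \
  your-vllm-cpu-image \
  --model Qwen/Qwen2.5-0.5B-Instruct

# GPU deployment sketch
docker run -d --name vllm-gpu \
  --gpus all \
  --shm-size=4g \
  -e CUDA_VISIBLE_DEVICES=0,1 \
  -e HF_TOKEN=hf_xxx \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-0.5B-Instruct
```

Either variant exposes the OpenAI-compatible API on port 8000 of the host.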

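Once the server is up, the OpenAI-compatible endpoint can be exercised with plain curl. The model name below is an assumed placeholder; use whichever model the server was started with.

```shell
# List the models the server is serving
curl http://localhost:8000/v1/models

# Send a chat completion request to the OpenAI-compatible endpoint
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 32
      }'
```

Because the API is OpenAI-compatible, any OpenAI client SDK pointed at http://localhost:8000/v1 should also work.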

Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/stakpak/community-paks --skill vllm-deployment
Category
Dev Tools
Verified
First Seen
2026-02-26
Updated
2026-03-10

Browse more skills from stakpak/community-paks

Quick answers

What is vllm-deployment?

Deploy vLLM for high-performance LLM inference. Covers Docker CPU/GPU deployments and cloud VM provisioning with OpenAI-compatible API endpoints. Source: stakpak/community-paks.

How do I install vllm-deployment?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/stakpak/community-paks --skill vllm-deployment. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/stakpak/community-paks

Details

Category
Dev Tools
Source
skills.sh
First Seen
2026-02-26