vllm-ascend

vLLM Ascend plugin for LLM inference serving on Huawei Ascend NPU. Use for offline batch inference, API server deployment, quantization inference (with msmodelslim quantized models), tensor/pipeline parallelism for distributed serving, and OpenAI-compatible API endpoints. Supports Qwen, DeepSeek, GLM, LLaMA models with Ascend-optimized kernels.
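As a quick illustration, here is a minimal offline batch inference sketch using vLLM's standard Python API; the model name, prompts, and tensor_parallel_size are illustrative and assume vllm plus vllm-ascend are installed on a machine with visible Ascend NPUs.

```python
from vllm import LLM, SamplingParams

prompts = [
    "Explain what an NPU is in one sentence.",
    "Write a haiku about inference serving.",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# tensor_parallel_size=2 shards the model across two NPUs; use 1 for a single device.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", tensor_parallel_size=2)

for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)
```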


Installation

$ npx skills add https://github.com/ascend-ai-coding/awesome-ascend-skills --skill vllm-ascend

How to Install vllm-ascend

Quickly install the vllm-ascend AI skill in your development environment from the command line.

  1. Open Terminal: Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.)
  2. Run Installation Command: Copy and run this command: npx skills add https://github.com/ascend-ai-coding/awesome-ascend-skills --skill vllm-ascend
  3. Verify Installation: Once installed, the skill will be automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw

Source: ascend-ai-coding/awesome-ascend-skills.

SKILL.md


vLLM-Ascend is a plugin for vLLM that enables efficient LLM inference on Huawei Ascend AI processors. It provides Ascend-optimized kernels, quantization support, and distributed inference capabilities.
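For serving, vLLM exposes an OpenAI-compatible endpoint. A minimal client sketch, assuming a server was already started (e.g. with `vllm serve Qwen/Qwen2.5-7B-Instruct`) and is listening on localhost:8000:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server accepts a placeholder API key by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",
    messages=[{"role": "user", "content": "Hello from an Ascend NPU!"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```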

vLLM-Ascend supports models quantized with msModelSlim. For quantization details, see msmodelslim.
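A hedged sketch of loading such a checkpoint: the quantization="ascend" flag follows the vllm-ascend quantization guide, and the model path is a placeholder for a local msModelSlim output directory.

```python
from vllm import LLM

# Placeholder path: point this at an msModelSlim-quantized checkpoint directory.
llm = LLM(model="/path/to/msmodelslim-w8a8-model", quantization="ascend")

# generate() falls back to default SamplingParams when none are given.
print(llm.generate(["What does W8A8 quantization mean?"])[0].outputs[0].text)
```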

The skill also documents key engine settings in a tuning table with columns Parameter, Default, Description, and Tuning Advice.


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command: npx skills add https://github.com/ascend-ai-coding/awesome-ascend-skills --skill vllm-ascend
Category: Dev Tools
Verified: yes
First Seen: 2026-03-09
Updated: 2026-03-10

