
llm-api-benchmark

This skill should be used when the user asks to "benchmark LLM API", "test API speed", "measure response time", "check API latency", "test TPS", "benchmark endpoint", "compare endpoint performance", "测试端点性能", "基准测试", "测试LLM速度", or needs to evaluate LLM API performance metrics (TTFT, TPS, latency).

14 Installs · 0 Trend · @ridewind

Installation

$ npx skills add https://github.com/ridewind/my-skills --skill llm-api-benchmark

How to Install llm-api-benchmark

Quickly install the llm-api-benchmark AI skill in your development environment via the command line.

  1. Open a Terminal: open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.)
  2. Run the Installation Command: copy and run npx skills add https://github.com/ridewind/my-skills --skill llm-api-benchmark
  3. Verify the Installation: once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw

Source: ridewind/my-skills.

SKILL.md


| Preset | Description | Target Output |
| --- | --- | --- |
| quick | Quick test | 10 tokens |
| standard | Medium-length test | 20 tokens |
| long | Long-output test | 100+ tokens |
| throughput | High-throughput test | 300-500 tokens |
| code | Code generation (default) | 500-1000 tokens |
| json | JSON output test | 30 tokens |

| Task Type | Description | Target Output | Use Case |
| --- | --- | --- | --- |
| counting | Numeric sequence generation | 50 tokens | TTFT testing (most stable) |
| structured-list | Structured list | 100-150 tokens | Medium-load testing |
| code-review | Code review report | 300-400 tokens | Code workloads |
| implementation | Full code implementation (default) | 500-700 tokens | Throughput testing |
| comprehensive | Comprehensive analysis report | 800-1000 tokens | Realistic workloads |
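The task types above pair a prompt with a target output budget. A minimal sketch of how such a mapping could drive benchmark requests, assuming an OpenAI-style chat payload; the prompt strings, model name, and `build_request` helper here are illustrative stand-ins, not the skill's actual implementation:

```python
# Hypothetical mapping from task type to request parameters.
# The max_tokens budgets mirror the "Target Output" column above;
# the prompts are illustrative, not the skill's real prompts.
TASK_TYPES = {
    "counting":        {"max_tokens": 50,   "prompt": "Count from 1 to 50, comma-separated."},
    "structured-list": {"max_tokens": 150,  "prompt": "List 10 HTTP status codes with one-line meanings."},
    "code-review":     {"max_tokens": 400,  "prompt": "Review the following function for bugs."},
    "implementation":  {"max_tokens": 700,  "prompt": "Implement a token-bucket rate limiter in Python."},
    "comprehensive":   {"max_tokens": 1000, "prompt": "Analyze the trade-offs between REST and gRPC."},
}

def build_request(task="implementation", model="gpt-4o-mini"):
    """Assemble an OpenAI-style chat payload for one benchmark run."""
    cfg = TASK_TYPES[task]
    return {
        "model": model,
        "stream": True,  # streaming is required to observe TTFT
        "max_tokens": cfg["max_tokens"],
        "messages": [{"role": "user", "content": cfg["prompt"]}],
    }
```

Keeping the output budget per task type lets TTFT-focused runs (short outputs) and throughput-focused runs (long outputs) use the same request-building code.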

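The core metrics (TTFT, TPS, latency) can be derived from the timestamps of a streamed response. A minimal sketch, assuming you record the request start time and each token's arrival time yourself; the function and field names are illustrative, not part of the skill:

```python
def compute_metrics(request_start, token_timestamps):
    """Compute TTFT, total latency, and TPS from a request start time
    and the arrival time of each streamed token (all in seconds)."""
    if not token_timestamps:
        raise ValueError("no tokens received")
    ttft = token_timestamps[0] - request_start    # time to first token
    total = token_timestamps[-1] - request_start  # end-to-end latency
    # Decode throughput: tokens after the first, divided by the time
    # spent streaming them (so prefill time is not counted in TPS).
    stream_time = token_timestamps[-1] - token_timestamps[0]
    tps = (len(token_timestamps) - 1) / stream_time if stream_time > 0 else float("inf")
    return {"ttft_s": ttft, "total_s": total, "tps": tps}

# Synthetic example: first token 0.5 s after the request,
# then 10 more tokens at 50 ms intervals.
start = 100.0
stamps = [100.5 + 0.05 * i for i in range(11)]
metrics = compute_metrics(start, stamps)
```

With these timestamps, TTFT is 0.5 s and the streaming throughput works out to 20 tokens per second.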

Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/ridewind/my-skills --skill llm-api-benchmark
Category: Dev Tools
Verified
First Seen: 2026-03-07
Updated: 2026-03-10


Quick answers

What is llm-api-benchmark?

llm-api-benchmark is a skill for evaluating LLM API performance. It is triggered by requests such as "benchmark LLM API", "test API speed", "measure response time", "check API latency", "test TPS", "benchmark endpoint", or "compare endpoint performance", and it measures metrics including TTFT (time to first token), TPS (tokens per second), and end-to-end latency. Source: ridewind/my-skills.

How do I install llm-api-benchmark?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/ridewind/my-skills --skill llm-api-benchmark. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/ridewind/my-skills

Details

Category: Dev Tools
Source: skills.sh
First Seen: 2026-03-07