llm-evaluation
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
Installation
SKILL.md
Master comprehensive evaluation strategies for LLM applications, from automated metrics to human evaluation and A/B testing.
- Automated Metrics: fast, repeatable, scalable evaluation using computed scores (see the first sketch below).
- Human Evaluation: manual assessment for quality aspects that are difficult to automate (see the second sketch below).
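To make the automated-metrics idea concrete, here is a minimal sketch in plain Python. Everything in it (the function names, the tiny eval set) is illustrative rather than part of the skill itself; it computes exact match and token-overlap F1 against reference answers.

```python
# Illustrative sketch of automated metrics for LLM outputs (not the
# skill's own code): exact match and token-level F1 against references.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, the same idea SQuAD-style QA evaluation uses."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical eval set; in practice this comes from your test suite.
eval_set = [
    {"prediction": "Paris is the capital of France.",
     "reference": "Paris is the capital of France."},
    {"prediction": "The answer is 42.", "reference": "42"},
]

em = sum(exact_match(x["prediction"], x["reference"]) for x in eval_set) / len(eval_set)
f1 = sum(token_f1(x["prediction"], x["reference"]) for x in eval_set) / len(eval_set)
print(f"exact match: {em:.2f}  token F1: {f1:.2f}")
```

Because these scores are pure functions of strings, they can run on every commit, which is what makes automated metrics fast, repeatable, and scalable.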
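Human evaluation and A/B testing are commonly reduced to pairwise preference judgments. A sketch under the same caveat (field names such as "winner" are made up for illustration): annotators pick the better of two model outputs, and the harness reports each model's win rate.

```python
# Illustrative sketch of A/B-style human evaluation: tally win rates
# from pairwise annotator judgments. Field names are hypothetical.
from collections import Counter

# Each record is one annotator's judgment on one prompt.
judgments = [
    {"prompt_id": 1, "winner": "model_a"},
    {"prompt_id": 2, "winner": "model_b"},
    {"prompt_id": 3, "winner": "model_a"},
    {"prompt_id": 4, "winner": "tie"},
]

tally = Counter(j["winner"] for j in judgments)
for outcome in ("model_a", "model_b", "tie"):
    print(f"{outcome}: {tally[outcome] / len(judgments):.0%}")
```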
Facts (cite-ready)
Stable fields and commands for AI/search citations.
- Install command: npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill llm-evaluation
- Category: Dev Tools
- Verified: ✓
- First Seen: 2026-02-01
- Updated: 2026-02-18
Quick answers
What is llm-evaluation?
llm-evaluation is a skill that implements comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use it when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks. Source: sickn33/antigravity-awesome-skills.
How do I install llm-evaluation?
Open your terminal (Terminal, iTerm, Windows Terminal, etc.) and run: npx skills add https://github.com/sickn33/antigravity-awesome-skills --skill llm-evaluation. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.
Where is the source repository?
https://github.com/sickn33/antigravity-awesome-skills
Details
- Category: Dev Tools
- Source: skills.sh
- First Seen: 2026-02-01