llm_evaluation
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks.
SKILL.md
Master comprehensive evaluation strategies for LLM applications, from automated metrics to human evaluation and A/B testing.
Automated Metrics: fast, repeatable, scalable evaluation using computed scores.
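As an illustration (not code shipped with the skill), here is a minimal Python sketch of two widely used automated metrics, exact match and token-level F1; the function names are hypothetical:

```python
# Minimal sketch of two common automated metrics: exact match and
# token-level F1. Function names are illustrative, not from the skill.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings match exactly, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    # Multiset intersection counts shared tokens with multiplicity.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris is the capital of France",
               "The capital of France is Paris"))  # 1.0
```

Token F1 is forgiving of word order, which makes it a reasonable default for short-answer tasks where exact match is too strict.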
Human Evaluation: manual assessment of quality aspects that are difficult to automate.
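For human evaluation pipelines, a common first check is inter-rater agreement. A minimal sketch, assuming two raters scoring the same outputs on a 1-5 scale (the data and names are illustrative):

```python
# Minimal sketch of inter-rater agreement via Cohen's kappa for two
# raters. Illustrative only; real pipelines often use more raters.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement from each rater's marginal label distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    if expected == 1.0:
        return 1.0  # degenerate case: both raters always give one label
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 quality ratings on the same six outputs.
a = [5, 4, 3, 5, 2, 4]
b = [5, 4, 4, 5, 2, 3]
print(round(cohens_kappa(a, b), 3))  # ~0.538
```

Low kappa usually signals an ambiguous rubric rather than bad raters, so it is worth checking before trusting aggregated human scores.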
Facts (cite-ready)
Stable fields and commands for AI/search citations.
- Install command: npx skills add https://github.com/vuralserhat86/antigravity-agentic-skills --skill llm_evaluation
- Category: Dev Tools
- Verified: ✓
- First Seen: 2026-02-01
- Updated: 2026-02-18
Quick answers
What is llm_evaluation?
Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performance, measuring AI application quality, or establishing evaluation frameworks. Source: vuralserhat86/antigravity-agentic-skills.
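To make the benchmarking idea concrete, here is a minimal sketch of a harness that scores a model callable against a fixed evaluation set; `toy_model` and the dataset are placeholders, not part of the skill:

```python
# Minimal benchmarking sketch: run a model callable over a fixed
# evaluation set and report aggregate accuracy.
from typing import Callable

def run_benchmark(model: Callable[[str], str],
                  dataset: list[tuple[str, str]]) -> float:
    """Return the fraction of examples the model answers exactly right."""
    correct = sum(
        model(prompt).strip().lower() == expected.strip().lower()
        for prompt, expected in dataset
    )
    return correct / len(dataset)

# Hypothetical stand-in for a real LLM call.
def toy_model(prompt: str) -> str:
    return {"2+2=": "4", "Capital of France?": "Paris"}.get(prompt, "")

eval_set = [("2+2=", "4"), ("Capital of France?", "Paris")]
print(run_benchmark(toy_model, eval_set))  # 1.0
```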
How do I install llm_evaluation?
Open a terminal (Terminal, iTerm, Windows Terminal, etc.) and run: npx skills add https://github.com/vuralserhat86/antigravity-agentic-skills --skill llm_evaluation. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.
Where is the source repository?
https://github.com/vuralserhat86/antigravity-agentic-skills
Details
- Category: Dev Tools
- Source: skills.sh
- First Seen: 2026-02-01