agent-evaluation
Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context-engineering choices, or measuring the quality of improvements.
SKILL.md
Evaluating agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack a single correct answer. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, a...
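To make the repeated-runs point concrete, here is a minimal sketch of an outcome-focused harness that scores an agent over several trials rather than trusting a single run. All names here (`run_agent`, `passes`, `TRIALS`) are hypothetical placeholders for illustration, not part of this skill:

```python
import statistics

TRIALS = 5  # agents are non-deterministic, so evaluate over repeated runs

def run_agent(task: str) -> str:
    """Hypothetical stand-in for invoking the agent under test."""
    raise NotImplementedError

def passes(task: str, output: str) -> bool:
    """Hypothetical outcome check: did this run achieve the goal?"""
    raise NotImplementedError

def pass_rate(task: str) -> float:
    """Aggregate outcomes across trials instead of trusting any one run."""
    results = [passes(task, run_agent(task)) for _ in range(TRIALS)]
    return statistics.mean(results)
```

A per-task pass rate like this also gives a stable baseline for spotting regressions between versions of a command or skill.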
Agent evaluation requires outcome-focused approaches that account for non-determinism and multiple valid paths. Multi-dimensional rubrics capture various quality aspects: factual accuracy, completeness, citation accuracy, source quality, and tool efficiency. LLM-as-judge provides scalable evaluation while human evaluation catches edge cases.
The key insight is that agents may find alternative paths to a goal; the evaluation should judge whether they achieve the right outcomes while following a reasonable process.
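As one way to picture the rubric-plus-judge approach described above, the sketch below scores a transcript on each dimension independently. The dimensions mirror those named earlier, while the judge prompts and the `judge()` helper are illustrative assumptions, not an API defined by this skill:

```python
# Rubric dimensions taken from the description above; the wording of each
# judge prompt is an illustrative assumption.
RUBRIC = {
    "factual_accuracy": "Are all claims in the response correct?",
    "completeness": "Does the response address every part of the task?",
    "citation_accuracy": "Does each citation actually support its claim?",
    "source_quality": "Are the cited sources authoritative and relevant?",
    "tool_efficiency": "Did the agent avoid redundant or unnecessary tool calls?",
}

def judge(question: str, transcript: str) -> int:
    """Hypothetical call to a judge model; returns a 1-5 score."""
    raise NotImplementedError

def score_transcript(transcript: str) -> dict[str, int]:
    # Score each dimension separately so weaknesses stay actionable,
    # rather than collapsing quality into a single opaque number.
    return {name: judge(prompt, transcript) for name, prompt in RUBRIC.items()}
```

Keeping dimensions separate matters here: an agent can take an unusual but valid path and still score well on outcomes, whereas a single aggregate score would hide which aspect actually failed.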
Source: neolabhq/context-engineering-kit.
Citable information
Stable fields and commands prepared for search and AI citation.
- Install command: `npx skills add https://github.com/neolabhq/context-engineering-kit --skill agent-evaluation`
- Category: Developer tools
- Certification: —
- Added: 2026-02-01
- Updated: 2026-02-18
Quick answers
What is agent-evaluation?
Evaluate and improve Claude Code commands, skills, and agents. Use when testing prompt effectiveness, validating context-engineering choices, or measuring the quality of improvements. Source: neolabhq/context-engineering-kit.
How do I install agent-evaluation?
Open your terminal or command-line tool (such as Terminal, iTerm, or Windows Terminal) and run the following command: `npx skills add https://github.com/neolabhq/context-engineering-kit --skill agent-evaluation`. After installation, the skill is configured automatically in your AI coding environment and is available in Claude Code or Cursor.
Where is this skill's source code?
https://github.com/neolabhq/context-engineering-kit
Details
- Category: Developer tools
- Source: user
- Added: 2026-02-01