evaluation
Establish an evaluation framework for agent systems. Use it when testing agent performance, validating context-engineering choices, or measuring improvement over time.
SKILL.md
Evaluation of agent systems requires different approaches than traditional software or even standard language model applications. Agents make dynamic decisions, are non-deterministic between runs, and often lack single correct answers. Effective evaluation must account for these characteristics while providing actionable feedback. A robust evaluation framework enables continuous improvement, catches regressions, a...
Agent evaluation requires outcome-focused approaches that account for non-determinism and multiple valid paths. Multi-dimensional rubrics capture various quality aspects: factual accuracy, completeness, citation accuracy, source quality, and tool efficiency. LLM-as-judge provides scalable evaluation while human evaluation catches edge cases.
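A minimal sketch of that idea, assuming a caller-supplied judge model call (the `call_judge` callable, the dimension weights, and the prompt wording below are illustrative assumptions, not part of the skill):

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Rubric dimensions mirroring those named above; the weights are illustrative.
RUBRIC: Dict[str, float] = {
    "factual_accuracy": 0.30,
    "completeness": 0.20,
    "citation_accuracy": 0.20,
    "source_quality": 0.15,
    "tool_efficiency": 0.15,
}

JUDGE_PROMPT = """You are grading an agent's answer.
Task: {task}
Answer: {answer}
Score each dimension from 0 to 5 and reply with one `dimension: score` line per dimension:
{dimensions}"""


@dataclass
class JudgeResult:
    scores: Dict[str, float]
    weighted_total: float


def judge_answer(task: str, answer: str, call_judge: Callable[[str], str]) -> JudgeResult:
    """LLM-as-judge pass. `call_judge` is any function that sends a prompt to a
    judge model and returns its text reply (assumed to be supplied by the caller)."""
    prompt = JUDGE_PROMPT.format(task=task, answer=answer, dimensions="\n".join(RUBRIC))
    reply = call_judge(prompt)

    scores: Dict[str, float] = {}
    for line in reply.splitlines():
        name, sep, value = line.partition(":")
        key = name.strip().lower().replace(" ", "_")
        if sep and key in RUBRIC:
            try:
                scores[key] = float(value.strip())
            except ValueError:
                pass  # ignore lines the judge formatted unexpectedly
    # Weighted total gives a single scalar for tracking improvement over time,
    # while the per-dimension scores stay available for diagnosis.
    total = sum(weight * scores.get(dim, 0.0) for dim, weight in RUBRIC.items())
    return JudgeResult(scores=scores, weighted_total=total)
```

Keeping the judge call behind a plain callable keeps the sketch independent of any particular model provider; human reviewers can reuse the same rubric on the cases the judge scores inconsistently.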
The key insight is that agents may find alternative paths to their goals; the evaluation should judge whether they achieve the right outcomes while following a reasonable process.
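A hedged sketch of such an outcome-focused check (the field names, the fact-matching heuristic, and the tool-call budget below are assumptions for illustration): the final result is scored against expected facts, the trajectory itself is never compared step by step, and only a light process constraint is applied.

```python
from typing import Dict, List


def evaluate_run(
    final_answer: str,
    expected_facts: List[str],
    tool_calls: List[str],
    max_tool_calls: int = 15,
) -> Dict[str, object]:
    """Judge the outcome, not the exact path: any trajectory passes as long as the
    required facts appear in the answer and the process stays within a loose budget."""
    answer = final_answer.lower()
    missing = [fact for fact in expected_facts if fact.lower() not in answer]
    return {
        "outcome_ok": not missing,
        "missing_facts": missing,
        "process_ok": len(tool_calls) <= max_tool_calls,  # light process check only
        "tool_calls_used": len(tool_calls),
    }


# Two runs that reach the same facts through different tool paths both pass,
# because only the outcome and the budget are checked.
report = evaluate_run(
    final_answer="Paris is the capital of France.",
    expected_facts=["Paris", "capital of France"],
    tool_calls=["web_search", "fetch_page"],
)
```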
Establish an evaluation framework for agent systems. Use it when testing agent performance, validating context-engineering choices, or measuring improvement over time. Source: mjunaidca/mjs-agent-skills.
Citable Information
Stable fields and commands prepared for search and AI citation.
- Install command
npx skills add https://github.com/mjunaidca/mjs-agent-skills --skill evaluation
- Category
- Developer Tools
- Verified
- ✓
- Date added
- 2026-02-01
- Last updated
- 2026-02-18
Quick Answers
What is evaluation?
Establish an evaluation framework for agent systems. Use it when testing agent performance, validating context-engineering choices, or measuring improvement over time. Source: mjunaidca/mjs-agent-skills.
How do I install evaluation?
Open your terminal or command-line tool (such as Terminal, iTerm, or Windows Terminal), then copy and run the following command: npx skills add https://github.com/mjunaidca/mjs-agent-skills --skill evaluation. Once installation finishes, the skill is automatically configured in your AI coding environment and can be used from Claude Code or Cursor.
Where is this Skill's source code?
https://github.com/mjunaidca/mjs-agent-skills
Details
- Category
- Developer Tools
- Source
- skills.sh
- Date added
- 2026-02-01