
agentic-eval

github/awesome-copilot

Patterns and techniques for evaluating and improving AI agent outputs. Use this skill when:

- Implementing self-critique and reflection loops
- Building evaluator-optimizer pipelines for quality-critical generation
- Creating test-driven code refinement workflows
- Designing rubric-based or LLM-as-judge evaluation systems
- Adding iterative improvement to agent outputs (code, reports, analysis)
- Measuring and improving agent response quality
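As a concrete illustration of the rubric-based / LLM-as-judge pattern named above, here is a minimal Python sketch. The `complete` callable, the rubric criteria, and the 1-5 scale are illustrative assumptions, not something the skill prescribes; `complete` stands in for whatever text-in/text-out LLM call your environment provides.

```python
import json
from typing import Callable

# Illustrative rubric; real criteria depend on your task.
RUBRIC = {
    "correctness": "Is the output factually and logically correct?",
    "completeness": "Does it address every part of the task?",
    "clarity": "Is it easy to read and well organized?",
}

def judge(task: str, output: str, complete: Callable[[str], str]) -> dict:
    """Score an agent output against a rubric using an LLM as judge.

    `complete` is any prompt-in/text-out LLM call; it is a placeholder,
    not an API this skill provides.
    """
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    prompt = (
        "You are a strict evaluator. Score the OUTPUT for the TASK on each "
        "criterion from 1 (poor) to 5 (excellent).\n"
        f"Criteria:\n{criteria}\n\n"
        f"TASK:\n{task}\n\nOUTPUT:\n{output}\n\n"
        'Respond with JSON only: {"scores": {"<criterion>": <int>}, '
        '"feedback": "<one short paragraph>"}'
    )
    return json.loads(complete(prompt))
```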

201 Installs · 9 Trend · @github

Installation

$ npx skills add https://github.com/github/awesome-copilot --skill agentic-eval

SKILL.md

Patterns for self-improvement through iterative evaluation and refinement.

Evaluation patterns enable agents to assess and improve their own outputs, moving beyond single-shot generation to iterative refinement loops.

Key insight: Use structured JSON output for reliable parsing of critique results.
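A minimal sketch of such a refinement loop, assuming a generic `complete(prompt) -> str` LLM call (a placeholder, not part of the skill): the agent drafts, critiques its own draft as structured JSON, and revises until the critique passes or a retry budget runs out.

```python
import json
from typing import Callable

MAX_ROUNDS = 3  # retry budget; an assumption, not prescribed by the skill

def refine(task: str, complete: Callable[[str], str]) -> str:
    """Generate, self-critique as structured JSON, and revise."""
    draft = complete(f"Complete this task:\n{task}")
    for _ in range(MAX_ROUNDS):
        # Ask for a machine-parseable verdict rather than free-form prose.
        critique = json.loads(complete(
            "Critique the DRAFT for the TASK. Respond with JSON only: "
            '{"pass": <bool>, "issues": ["<issue>", ...]}\n'
            f"TASK:\n{task}\n\nDRAFT:\n{draft}"
        ))
        if critique["pass"]:
            break
        issues = "\n".join(f"- {i}" for i in critique["issues"])
        draft = complete(
            f"Revise the DRAFT to fix these issues:\n{issues}\n\n"
            f"TASK:\n{task}\n\nDRAFT:\n{draft}"
        )
    return draft
```

In practice the `json.loads` call should be guarded against malformed model output (retry or repair on failure), which is the motivation for requesting structured JSON in the first place.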


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/github/awesome-copilot --skill agentic-eval
Category
Dev Tools
Verified
First Seen
2026-02-01
Updated
2026-02-18

Quick answers

What is agentic-eval?

agentic-eval is a skill from github/awesome-copilot that collects patterns for evaluating and improving AI agent outputs: self-critique and reflection loops, evaluator-optimizer pipelines, test-driven code refinement workflows, and rubric-based or LLM-as-judge evaluation systems.

How do I install agentic-eval?

1. Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.).
2. Copy and run this command: npx skills add https://github.com/github/awesome-copilot --skill agentic-eval
3. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.

Where is the source repository?

https://github.com/github/awesome-copilot

Details

Source
skills.sh